Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-16 02:13
Elapsed: 1h29m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/a849f0f4-29ca-4dab-a0e4-c970b5dbb1c1/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 459 lines ...
Project: k8s-jkns-gci-gce-serial-1-2
Network Project: k8s-jkns-gci-gce-serial-1-2
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network bootstrap-e2e: 
W0116 03:02:23.989779  105918 loader.go:223] Config not found: /workspace/.kube/config
... skipping 144 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.83.18.233; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.............Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-serial-1-2_bootstrap-e2e" set.
User "k8s-jkns-gci-gce-serial-1-2_bootstrap-e2e" set.
Context "k8s-jkns-gci-gce-serial-1-2_bootstrap-e2e" created.
Switched to context "k8s-jkns-gci-gce-serial-1-2_bootstrap-e2e".
... skipping 27 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   26s   v1.18.0-alpha.1.789+5d1c3016103d83
bootstrap-e2e-minion-group-1s6w   Ready                      <none>   20s   v1.18.0-alpha.1.789+5d1c3016103d83
bootstrap-e2e-minion-group-5wn8   Ready                      <none>   20s   v1.18.0-alpha.1.789+5d1c3016103d83
bootstrap-e2e-minion-group-7htw   Ready                      <none>   20s   v1.18.0-alpha.1.789+5d1c3016103d83
bootstrap-e2e-minion-group-dwjn   Ready                      <none>   20s   v1.18.0-alpha.1.789+5d1c3016103d83
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 77 lines ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=46922 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 15 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-1s6w bootstrap-e2e-minion-group-5wn8 bootstrap-e2e-minion-group-7htw bootstrap-e2e-minion-group-dwjn
Failures for bootstrap-e2e-minion-group (if any):
2020/01/16 03:09:45 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m9.670209366s
2020/01/16 03:09:45 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-jkns-gci-gce-serial-1-2
... skipping 109 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 716 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 147 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:06.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:06.621: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 217 lines ...
Jan 16 03:10:06.882: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 16 03:10:07.042: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7242
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 16 03:10:07.291: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:11.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7242" for this suite.


• [SLOW TEST:5.963 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:12.421: INFO: Driver local doesn't support ext4 -- skipping
... skipping 44 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:12.831: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 61 lines ...
• [SLOW TEST:7.876 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:14.341: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
• [SLOW TEST:7.363 seconds]
[sig-api-machinery] Servers with support for Table transformation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:15.638: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:15.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
• [SLOW TEST:9.218 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:105
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:15.683: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 151 lines ...
• [SLOW TEST:11.163 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:17.614: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 49 lines ...
• [SLOW TEST:11.809 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:18.292: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:18.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 112 lines ...
• [SLOW TEST:11.987 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should prevent NodePort collisions
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1752
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:18.494: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:18.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
• [SLOW TEST:13.083 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:19.585: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on tmpfs should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:70
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:19.728: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 301 lines ...
• [SLOW TEST:18.924 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:25.438: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 50 lines ...
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
Jan 16 03:10:18.908: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47" in namespace "security-context-test-8540" to be "success or failure"
Jan 16 03:10:19.034: INFO: Pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47": Phase="Pending", Reason="", readiness=false. Elapsed: 125.302481ms
Jan 16 03:10:21.301: INFO: Pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392594536s
Jan 16 03:10:23.572: INFO: Pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663742395s
Jan 16 03:10:25.695: INFO: Pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47": Phase="Failed", Reason="", readiness=false. Elapsed: 6.786564591s
Jan 16 03:10:25.695: INFO: Pod "busybox-readonly-true-b053c1d7-e531-44a9-8194-54f22ca0bc47" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:25.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8540" for this suite.

... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-windows] Windows volume mounts 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jan 16 03:10:28.762: INFO: Only supported for node OS distro [windows] (not gci)
[AfterEach] [sig-windows] Windows volume mounts 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:28.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 112 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for cronjob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1260
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:17.977 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:30.830: INFO: Only supported for providers [azure] (not gce)
... skipping 270 lines ...
• [SLOW TEST:25.532 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable deny evictions, integer => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction","total":-1,"completed":1,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 99 lines ...
• [SLOW TEST:18.062 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:33.766: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 162 lines ...
• [SLOW TEST:10.023 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:10:38.807: INFO: >>> kubeConfig: /workspace/.kube/config
[It] watch and report errors with accept "application/vnd.kubernetes.protobuf"
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
Jan 16 03:10:38.809: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:39.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:39.590: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:39.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 80 lines ...
• [SLOW TEST:35.035 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 33 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:42.276: INFO: Only supported for providers [aws] (not gce)
... skipping 31 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:44.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6326" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:45.139: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 37 lines ...
Jan 16 03:10:23.020: INFO: Waiting for PV local-pvndtwl to bind to PVC pvc-7nm4b
Jan 16 03:10:23.020: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7nm4b] to have phase Bound
Jan 16 03:10:23.298: INFO: PersistentVolumeClaim pvc-7nm4b found but phase is Pending instead of Bound.
Jan 16 03:10:25.388: INFO: PersistentVolumeClaim pvc-7nm4b found and phase=Bound (2.367835181s)
Jan 16 03:10:25.388: INFO: Waiting up to 3m0s for PersistentVolume local-pvndtwl to have phase Bound
Jan 16 03:10:25.532: INFO: PersistentVolume local-pvndtwl found and phase=Bound (143.934424ms)
[It] should fail scheduling due to different NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 03:10:25.792: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8ae24850-2fe2-407d-9a80-babc57d00554] Namespace:persistent-local-volumes-test-6257 PodName:hostexec-bootstrap-e2e-minion-group-1s6w-5cnl6 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 03:10:25.792: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 29 lines ...

• [SLOW TEST:43.875 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeAffinity
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:50.332: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 148 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:51.188: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support port-forward
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:752
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:53.687: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 134 lines ...
• [SLOW TEST:34.932 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:54.700: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:54.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 100 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:54.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8070" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 42 lines ...
• [SLOW TEST:49.072 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:55.611: INFO: Only supported for providers [aws] (not gce)
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:10:55.399: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4525
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-bbc9b1ba-4456-48c2-b24a-cb745ad8f661
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:57.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4525" for this suite.
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:10:59.124: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:59.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 71 lines ...
Jan 16 03:10:06.829: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 16 03:10:06.986: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4409
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 16 03:10:07.236: INFO: PodSpec: initContainers in spec.initContainers
Jan 16 03:10:59.380: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ef10e308-115c-4a8e-b8a6-c68330e65207", GenerateName:"", Namespace:"init-container-4409", SelfLink:"/api/v1/namespaces/init-container-4409/pods/pod-init-ef10e308-115c-4a8e-b8a6-c68330e65207", UID:"ae5e8d4c-5d25-4fc2-9826-3f31caddc3fa", ResourceVersion:"2931", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714741007, loc:(*time.Location)(0x7bb6e80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"236506948"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kgbkx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00167eac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kgbkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kgbkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kgbkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000403bf0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-5wn8", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022554a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000403c70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000403cf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000403cf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000403cfc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714741007, loc:(*time.Location)(0x7bb6e80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714741007, loc:(*time.Location)(0x7bb6e80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714741007, loc:(*time.Location)(0x7bb6e80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714741007, loc:(*time.Location)(0x7bb6e80)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.5", PodIP:"10.64.4.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.4.3"}}, StartTime:(*v1.Time)(0xc001a79ca0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001a79ce0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002ca5810)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://dd1a462699e3aa654001425317e282b3a6ec1dbd7b14d1c8773abbc6fe1a0bf9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a79dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a79cc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000403e1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:10:59.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4409" for this suite.
• [SLOW TEST:53.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 52 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 165 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:28.996 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:05.532: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:05.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 57 lines ...
• [SLOW TEST:24.489 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support subPath [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:91
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:16.120 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:06.493: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 107 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:10:58.108: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3651
... skipping 121 lines ...
Jan 16 03:10:54.089: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-1776-crds.spec'
Jan 16 03:10:54.854: INFO: stderr: ""
Jan 16 03:10:54.854: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1776-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 16 03:10:54.855: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-1776-crds.spec.bars'
Jan 16 03:10:55.576: INFO: stderr: ""
Jan 16 03:10:55.576: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1776-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 16 03:10:55.576: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-1776-crds.spec.bars2'
Jan 16 03:10:56.428: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:09.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6066" for this suite.
... skipping 2 lines ...
• [SLOW TEST:39.185 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [sig-storage] Ephemeralstorage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:10:26.165: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-9513
... skipping 16 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:10.493: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:10.423 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:15.088: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
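The repeated "Driver local doesn't support DynamicPV -- skipping" lines above come from the storage test suite checking each test pattern against the driver's declared capabilities before running the test body. The sketch below is a hedged illustration of that gating idea only; the function name, capability map, and strings are hypothetical, not the real `testsuites` API.

```go
package main

import "fmt"

// skipUnlessSupported illustrates capability gating: a test pattern is
// compared against the driver's declared capabilities, and unsupported
// combinations are skipped with a message like the ones in this log.
// The map-based capability model here is an assumption for illustration.
func skipUnlessSupported(driver string, caps map[string]bool, pattern string) (run bool, msg string) {
	if caps[pattern] {
		return true, ""
	}
	return false, fmt.Sprintf("Driver %s doesn't support %s -- skipping", driver, pattern)
}

func main() {
	// The "local" driver supports only pre-provisioned volumes (illustrative).
	localCaps := map[string]bool{"PreprovisionedPV": true}
	if run, msg := skipUnlessSupported("local", localCaps, "DynamicPV"); !run {
		fmt.Println(msg)
	}
}
```

Run as-is, this prints the same skip message seen throughout the log for the `local` driver.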
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":3,"skipped":56,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:10.073: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4925
... skipping 75 lines ...
• [SLOW TEST:70.211 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:18.037: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:18.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 75 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 53 lines ...
Jan 16 03:11:08.262: INFO: Pod exec-volume-test-preprovisionedpv-l2fr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-l2fr
Jan 16 03:11:08.262: INFO: Deleting pod "exec-volume-test-preprovisionedpv-l2fr" in namespace "volume-9671"
STEP: Deleting pv and pvc
Jan 16 03:11:08.597: INFO: Deleting PersistentVolumeClaim "pvc-c5m6k"
Jan 16 03:11:09.020: INFO: Deleting PersistentVolume "gcepd-xkj79"
Jan 16 03:11:10.396: INFO: error deleting PD "bootstrap-e2e-b47e521a-15d5-4880-87f1-53dc9d4c7410": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-b47e521a-15d5-4880-87f1-53dc9d4c7410' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7htw', resourceInUseByAnotherResource
Jan 16 03:11:10.396: INFO: Couldn't delete PD "bootstrap-e2e-b47e521a-15d5-4880-87f1-53dc9d4c7410", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-b47e521a-15d5-4880-87f1-53dc9d4c7410' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7htw', resourceInUseByAnotherResource
Jan 16 03:11:17.680: INFO: Successfully deleted PD "bootstrap-e2e-b47e521a-15d5-4880-87f1-53dc9d4c7410".
Jan 16 03:11:17.680: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:17.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9671" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:14.912 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:18.699: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 15 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:16.746: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 71 lines ...
• [SLOW TEST:13.438 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:23.941: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 116 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:24.275: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 117 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:00.000: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4830
... skipping 91 lines ...
• [SLOW TEST:30.087 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:32.624: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 133 lines ...
• [SLOW TEST:48.226 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:33.373: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:33.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 51 lines ...
• [SLOW TEST:28.559 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:72
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 93 lines ...
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nlnnq webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-nlnnq b63b9943-c1a0-4661-b15d-d0dc65ab0982 4010 0 2020-01-16 03:11:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d74d60 0xc002d74d61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-5wn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 03:11:36.155: INFO: Pod "webserver-deployment-c7997dcc8-r56ls" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r56ls webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-r56ls c0b59203-c85e-48f8-84f3-e6d423728d75 4016 0 2020-01-16 03:11:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d74eb0 0xc002d74eb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-1s6w,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 03:11:36.155: INFO: Pod "webserver-deployment-c7997dcc8-rwhcq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rwhcq webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-rwhcq abefc4c1-65ab-4992-ab39-8b6076dc9a91 4200 0 2020-01-16 03:11:32 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d74fc0 0xc002d74fc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-7htw,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 03:11:36.155: INFO: Pod "webserver-deployment-c7997dcc8-tws4q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tws4q webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-tws4q fb52e1fa-2565-409c-8fa2-a01be80636a2 4239 0 2020-01-16 03:11:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d750e0 0xc002d750e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-1s6w,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.3.42,StartTime:2020-01-16 03:11:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 03:11:36.155: INFO: Pod "webserver-deployment-c7997dcc8-vlcg2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vlcg2 webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-vlcg2 7b6b72b7-e42a-42e0-8071-482bd496bf2a 4176 0 2020-01-16 03:11:32 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d752a0 0xc002d752a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-dwjn,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 03:11:36.156: INFO: Pod "webserver-deployment-c7997dcc8-vt98l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vt98l webserver-deployment-c7997dcc8- deployment-1203 /api/v1/namespaces/deployment-1203/pods/webserver-deployment-c7997dcc8-vt98l 4307653b-6b99-4a91-9acc-eaab869f77e0 4198 0 2020-01-16 03:11:32 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 215972b5-111e-4b3b-85c6-d213bd7abb5e 0xc002d753d0 0xc002d753d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-7htw,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 03:11:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:62.968 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:36.751: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:36.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 180 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:43.051: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859
    should create a pod from an image when restart is Never  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:09.088: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9613
... skipping 35 lines ...
• [SLOW TEST:38.831 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:47.938: INFO: Only supported for providers [azure] (not gce)
... skipping 145 lines ...
• [SLOW TEST:43.556 seconds]
[sig-api-machinery] Servers with support for API chunking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:17.348 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [k8s.io] [sig-node] kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:10:36.709: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-3886
... skipping 104 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  [k8s.io] [sig-node] Clean up pods on node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    kubelet should be able to delete 10 pods per node in 1m0s.
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:340
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:54.276: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 152 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:56.243: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:56.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 82 lines ...
• [SLOW TEST:38.644 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:57.354: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:57.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:11:59.408: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:11:59.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:65.870 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:01.519: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:16.857 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:102
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:11.006: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:23.748 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:11.525: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:11.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:12.766: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 229 lines ...
Jan 16 03:11:43.505: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-5493-gcepd-sczs7s7
STEP: creating a claim
Jan 16 03:11:43.871: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jan 16 03:11:44.701: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 03:11:45.798: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:48.130: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:50.678: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:52.589: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:54.319: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:56.314: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:11:59.124: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:00.174: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:02.442: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:04.745: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:06.288: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:08.243: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:10.023: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:12.290: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:13.986: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:16.731: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:17.377: INFO: Error updating pvc gcepdc2x2j: PersistentVolumeClaim "gcepdc2x2j" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 03:12:17.377: INFO: Deleting PersistentVolumeClaim "gcepdc2x2j"
STEP: Deleting sc
Jan 16 03:12:19.553: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 97 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:22.106: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 76 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:03.019: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 29 lines ...
STEP: Deleting pod hostexec-bootstrap-e2e-minion-group-5wn8-wwtlb in namespace volumemode-9902
Jan 16 03:12:05.956: INFO: Deleting pod "security-context-63663c1e-2584-4e01-9cb1-946db22dc610" in namespace "volumemode-9902"
Jan 16 03:12:06.289: INFO: Wait up to 5m0s for pod "security-context-63663c1e-2584-4e01-9cb1-946db22dc610" to be fully deleted
STEP: Deleting pv and pvc
Jan 16 03:12:14.690: INFO: Deleting PersistentVolumeClaim "pvc-rzfhb"
Jan 16 03:12:14.880: INFO: Deleting PersistentVolume "gcepd-pqm8m"
Jan 16 03:12:16.217: INFO: error deleting PD "bootstrap-e2e-ccf846a5-fc20-4275-8883-1746e071fdd0": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-ccf846a5-fc20-4275-8883-1746e071fdd0' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:12:16.217: INFO: Couldn't delete PD "bootstrap-e2e-ccf846a5-fc20-4275-8883-1746e071fdd0", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-ccf846a5-fc20-4275-8883-1746e071fdd0' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:12:23.384: INFO: Successfully deleted PD "bootstrap-e2e-ccf846a5-fc20-4275-8883-1746e071fdd0".
Jan 16 03:12:23.384: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:23.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-9902" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:24.411: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:24.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
• [SLOW TEST:27.162 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:53
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:26.584: INFO: Driver local doesn't support ext3 -- skipping
... skipping 138 lines ...
• [SLOW TEST:15.548 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":6,"skipped":59,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:28.593: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:17.690 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a volume subpath [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:161
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:29.232: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 112 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:31.858: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 83 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:28.599: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-4403
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191
Jan 16 03:12:30.951: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Jan 16 03:12:31.722: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 16 03:12:34.564: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 16 03:12:36.825: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Not enough topologies in cluster -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:198
------------------------------
... skipping 153 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:40.283: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:40.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [sig-storage] Flexvolumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:20.978: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename flexvolume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in flexvolume-3608
... skipping 99 lines ...
Jan 16 03:12:32.828: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after flex
Jan 16 03:12:33.810: INFO: Deleting pod "flex-client" in namespace "flexvolume-3608"
Jan 16 03:12:33.933: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Jan 16 03:12:40.187: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-3608" to be "terminated due to deadline exceeded"
Jan 16 03:12:40.389: INFO: Pod "flex-client" in namespace "flexvolume-3608" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-flexvolume-3608 from node bootstrap-e2e-minion-group-5wn8
Jan 16 03:12:40.389: INFO: Getting external IP address for bootstrap-e2e-minion-group-5wn8
Jan 16 03:12:40.912: INFO: ssh prow@35.227.137.215:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-flexvolume-3608
Jan 16 03:12:40.912: INFO: ssh prow@35.227.137.215:22: stdout:    ""
Jan 16 03:12:40.912: INFO: ssh prow@35.227.137.215:22: stderr:    ""
Jan 16 03:12:40.912: INFO: ssh prow@35.227.137.215:22: exit code: 0
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":5,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:11:09.081: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2062
... skipping 16 lines ...
• [SLOW TEST:93.043 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:42.133: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
Jan 16 03:12:20.789: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:21.018: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:21.717: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:22.005: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:22.183: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:22.428: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:22.941: INFO: Lookups using dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local]

Jan 16 03:12:28.368: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:28.581: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:28.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:29.089: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:29.808: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:30.085: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:30.526: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:31.008: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:31.688: INFO: Lookups using dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local]

Jan 16 03:12:33.115: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.234: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.349: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.484: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.701: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.765: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.839: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:33.947: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local from pod dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d: the server could not find the requested resource (get pods dns-test-5fd2515c-0382-4664-be55-a042fdac515d)
Jan 16 03:12:34.168: INFO: Lookups using dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7110.svc.cluster.local jessie_udp@dns-test-service-2.dns-7110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7110.svc.cluster.local]

Jan 16 03:12:40.695: INFO: DNS probes using dns-7110/dns-test-5fd2515c-0382-4664-be55-a042fdac515d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:42.516 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:22.346: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5825
... skipping 26 lines ...
• [SLOW TEST:20.595 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:56
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:42.946: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:45.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2189" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:45.458: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:45.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 32 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:27.865: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9762
... skipping 27 lines ...
• [SLOW TEST:19.087 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:46.957: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:47.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1600" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:48.193: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:48.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 42 lines ...
• [SLOW TEST:11.016 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 77 lines ...
STEP: cleaning the environment after gcepd
Jan 16 03:12:25.216: INFO: Deleting pod "gcepd-client" in namespace "volume-9046"
Jan 16 03:12:25.429: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 16 03:12:41.859: INFO: Deleting PersistentVolumeClaim "pvc-9k9s2"
Jan 16 03:12:42.152: INFO: Deleting PersistentVolume "gcepd-z7bf8"
Jan 16 03:12:43.595: INFO: error deleting PD "bootstrap-e2e-1b5b81aa-3dbc-4aea-a1cf-75e55cf3de83": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-1b5b81aa-3dbc-4aea-a1cf-75e55cf3de83' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7htw', resourceInUseByAnotherResource
Jan 16 03:12:43.595: INFO: Couldn't delete PD "bootstrap-e2e-1b5b81aa-3dbc-4aea-a1cf-75e55cf3de83", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-1b5b81aa-3dbc-4aea-a1cf-75e55cf3de83' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7htw', resourceInUseByAnotherResource
Jan 16 03:12:50.787: INFO: Successfully deleted PD "bootstrap-e2e-1b5b81aa-3dbc-4aea-a1cf-75e55cf3de83".
Jan 16 03:12:50.787: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:50.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9046" for this suite.
... skipping 8 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:51.310: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
• [SLOW TEST:89.458 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:51.377: INFO: Driver gluster doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:51.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 106 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147

      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:51.381: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:51.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 262 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:42.316: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-614
... skipping 179 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:361
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:51.632: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 15 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":5,"skipped":50,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:42.112: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2061
... skipping 22 lines ...
• [SLOW TEST:10.735 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 143 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:52.976: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:12:52.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 161 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:12:58.615: INFO: Only supported for providers [vsphere] (not gce)
... skipping 91 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:03.314: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
• [SLOW TEST:12.558 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:05.418: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 9 lines ...
Jan 16 03:12:34.226: INFO: Creating resource for dynamic PV
Jan 16 03:12:34.226: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-579-gcepd-scrzgdg
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jan 16 03:12:34.602: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 03:12:34.728: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:37.294: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:39.175: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:41.166: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:43.286: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:44.974: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:47.314: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:49.095: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:51.280: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:52.916: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:55.093: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:57.278: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:12:59.138: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:13:01.012: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:13:03.310: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:13:04.940: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 03:13:05.102: INFO: Error updating pvc gcepd7j9ln: PersistentVolumeClaim "gcepd7j9ln" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 03:13:05.102: INFO: Deleting PersistentVolumeClaim "gcepd7j9ln"
STEP: Deleting sc
Jan 16 03:13:05.446: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 8 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:10.051: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 153 lines ...
• [SLOW TEST:93.675 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:113
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:21.020 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":8,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 270 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:16.587: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:16.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
• [SLOW TEST:12.616 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:7.779 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:20.750: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:20.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
STEP: Creating the service on top of the pods in kubernetes
Jan 16 03:12:38.624: INFO: Service node-port-service in namespace nettest-298 found.
Jan 16 03:12:39.758: INFO: Service session-affinity-service in namespace nettest-298 found.
STEP: dialing(udp) test-container-pod --> 10.0.98.152:90
Jan 16 03:12:40.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.65:8080/dial?request=hostName&protocol=udp&host=10.0.98.152&port=90&tries=1'] Namespace:nettest-298 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 03:12:40.090: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 03:12:46.097: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.2.65:33816-\u003e10.0.98.152:90: i/o timeout'"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 03:12:48.247: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.65:8080/dial?request=hostName&protocol=udp&host=10.0.98.152&port=90&tries=1'] Namespace:nettest-298 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 03:12:48.247: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 03:12:49.451: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-3"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 03:12:51.540: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.65:8080/dial?request=hostName&protocol=udp&host=10.0.98.152&port=90&tries=1'] Namespace:nettest-298 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 03:12:51.540: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 03:12:52.236: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-3"]}, stderr: , command run in: (*v1.Pod)(nil)
... skipping 29 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for client IP based session affinity: udp [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:282
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":-1,"completed":3,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:22.986: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 30 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 121 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:23.690: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision a volume and schedule a pod with AllowedTopologies
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:23.974: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:23.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 222 lines ...
• [SLOW TEST:13.018 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:63
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update PodDisruptionBudget status","total":-1,"completed":9,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:25.685: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:25.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
• [SLOW TEST:9.683 seconds]
[sig-auth] Metadata Concealment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should run a check-metadata-concealment job to completion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:33
------------------------------
{"msg":"PASSED [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:26.276: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 129 lines ...
• [SLOW TEST:40.832 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:855
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":5,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:27.810: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 128 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:29.532: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:29.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 52 lines ...
• [SLOW TEST:24.262 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":8,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:29.702: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:29.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 184 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:32.636: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:32.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
• [SLOW TEST:14.922 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:87
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":6,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:34.012: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:34.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 184 lines ...
      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":9,"skipped":69,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:32.124: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7722
... skipping 11 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:35.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7722" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:36.307: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:36.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 107 lines ...
• [SLOW TEST:8.212 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile","total":-1,"completed":4,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:42.983: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
Jan 16 03:12:56.444: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qrzjd] to have phase Bound
Jan 16 03:12:56.853: INFO: PersistentVolumeClaim pvc-qrzjd found but phase is Pending instead of Bound.
Jan 16 03:12:59.031: INFO: PersistentVolumeClaim pvc-qrzjd found and phase=Bound (2.587453553s)
Jan 16 03:12:59.031: INFO: Waiting up to 3m0s for PersistentVolume gce-265mj to have phase Bound
Jan 16 03:12:59.213: INFO: PersistentVolume gce-265mj found and phase=Bound (181.851955ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Jan 16 03:13:24.269: INFO: Deleting PersistentVolumeClaim "pvc-qrzjd"
STEP: Deleting the Pod
Jan 16 03:13:24.656: INFO: Deleting pod "pvc-tester-n8m4p" in namespace "pv-5698"
Jan 16 03:13:24.779: INFO: Wait up to 5m0s for pod "pvc-tester-n8m4p" to be fully deleted
... skipping 14 lines ...
Jan 16 03:13:44.526: INFO: Successfully deleted PD "bootstrap-e2e-a4c712ca-edbd-4bf5-999e-fd2902b8d72f".


• [SLOW TEST:53.106 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:56.531: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-8818
... skipping 42 lines ...
STEP: Deleting the previously created pod
Jan 16 03:13:32.523: INFO: Deleting pod "pvc-volume-tester-hdc78" in namespace "csi-mock-volumes-8818"
Jan 16 03:13:32.782: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hdc78" to be fully deleted
STEP: Checking CSI driver logs
Jan 16 03:13:37.319: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8818","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8818","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8818","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8818","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8818","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa","storage.kubernetes.io/csiProvisionerIdentity":"1579144397138-8081-csi-mock-csi-mock-volumes-8818"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa","storage.kubernetes.io/csiProvisionerIdentity":"1579144397138-8081-csi-mock-csi-mock-volumes-8818"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa/globalmount","target_path":"/var/lib/kubelet/pods/3dc7ae46-b214-4040-9a93-576e34df3537/volumes/kubernetes.io~csi/pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa","storage.kubernetes.io/csiProvisionerIdentity":"1579144397138-8081-csi-mock-csi-mock-volumes-8818"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3dc7ae46-b214-4040-9a93-576e34df3537/volumes/kubernetes.io~csi/pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa/globalmount"},"Response":{},"Error":""}

Jan 16 03:13:37.319: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-hdc78
Jan 16 03:13:37.319: INFO: Deleting pod "pvc-volume-tester-hdc78" in namespace "csi-mock-volumes-8818"
STEP: Deleting claim pvc-tjffn
Jan 16 03:13:38.053: INFO: Waiting up to 2m0s for PersistentVolume pvc-c4d394bd-8985-4a07-be4f-e16c5d4fe1fa to get deleted
... skipping 65 lines ...
• [SLOW TEST:13.531 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":11,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:49.845: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 78 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] GCP Volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  GlusterFS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:124
    should be mountable
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:125
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes GlusterFS should be mountable","total":-1,"completed":5,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:14.776 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:457
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":-1,"completed":7,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:52.579: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:53.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9058" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":12,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:53.417: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:53.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:214

      Driver csi-hostpath doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:25.112: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-2366
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:54.489: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:54.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:66.400 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:195
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:13:58.034: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:13:58.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:31.395: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9117
... skipping 76 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is root
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":6,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:01.607: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:03.612: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-8728
... skipping 85 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should preserve attachment policy when no CSIDriver present
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:01.953: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:58.307: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6773
... skipping 9 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:01.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6773" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:04.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2792" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":8,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:05.189: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:06.079: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 151 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:07.086: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:07.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 167 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:08.056: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 35 lines ...
Jan 16 03:14:10.590: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [4.487 seconds]
[sig-storage] PersistentVolumes:vsphere
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:163

  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 91 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with multiple PVs and PVCs all in same ns
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:211
      should create 2 PVs and 4 PVCs: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:10.618: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:10.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:10.643 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 115 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:16.680: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:16.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 144 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:17.037: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
Jan 16 03:13:06.752: INFO: creating *v1.StatefulSet: csi-mock-volumes-4492/csi-mockplugin
Jan 16 03:13:06.975: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4492
Jan 16 03:13:07.569: INFO: creating *v1.StatefulSet: csi-mock-volumes-4492/csi-mockplugin-attacher
Jan 16 03:13:08.449: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4492"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jan 16 03:13:44.337: INFO: Error getting logs for pod csi-inline-volume-lpwqv: the server rejected our request for an unknown reason (get pods csi-inline-volume-lpwqv)
STEP: Deleting pod csi-inline-volume-lpwqv in namespace csi-mock-volumes-4492
STEP: Deleting the previously created pod
Jan 16 03:13:50.890: INFO: Deleting pod "pvc-volume-tester-x87lb" in namespace "csi-mock-volumes-4492"
Jan 16 03:13:51.096: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x87lb" to be fully deleted
STEP: Checking CSI driver logs
Jan 16 03:14:00.438: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4492","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4492","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4492","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4492","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-dd20f6483db255574375c67b011aad6f8d876ed4d6b25b90c275f8ae03d962eb","target_path":"/var/lib/kubelet/pods/2f72a1a5-0c04-46a5-8b23-a13e04ab8d71/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-x87lb","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-4492","csi.storage.k8s.io/pod.uid":"2f72a1a5-0c04-46a5-8b23-a13e04ab8d71","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-dd20f6483db255574375c67b011aad6f8d876ed4d6b25b90c275f8ae03d962eb","target_path":"/var/lib/kubelet/pods/2f72a1a5-0c04-46a5-8b23-a13e04ab8d71/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Jan 16 03:14:00.438: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 16 03:14:00.438: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-x87lb
Jan 16 03:14:00.438: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4492
Jan 16 03:14:00.438: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 2f72a1a5-0c04-46a5-8b23-a13e04ab8d71
Jan 16 03:14:00.438: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:18.181: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 39 lines ...
• [SLOW TEST:7.434 seconds]
[sig-api-machinery] Generated clientset
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:220
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":6,"skipped":77,"failed":0}
[BeforeEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:44.528: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-2391
... skipping 352 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":7,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:20.254: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":7,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] GCP Volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:46.401: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename gcp-volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gcp-volume-947
... skipping 33 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  NFSv3
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:98
    should be mountable for NFSv3
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:99
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3","total":-1,"completed":8,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:24.540: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
• [SLOW TEST:261.048 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:17.210 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:27.838: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:17.486 seconds]
[sig-auth] PodSecurityPolicy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should enforce the restricted policy.PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:84
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy","total":-1,"completed":11,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:28.082: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:28.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.500 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:31.688: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:31.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9193" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":12,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:40.387 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:438
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":8,"skipped":58,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 82 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":8,"skipped":26,"failed":0}
[BeforeEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:19.695: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-7808
... skipping 22 lines ...
• [SLOW TEST:15.792 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:35.493: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 105 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:35.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podsecuritypolicy-9494" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available","total":-1,"completed":9,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:36.495: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
• [SLOW TEST:5.386 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should update ConfigMap successfully
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:137
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":8,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:36.558: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:36.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":8,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:27.049: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 36 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:48.238: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 32 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 44 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:40.346: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 49 lines ...
Jan 16 03:14:24.912: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-3704 -- grep  /opt/0  /proc/mounts'
Jan 16 03:14:27.482: INFO: stderr: ""
Jan 16 03:14:27.482: INFO: stdout: "/dev/sdb /opt/0 ext3 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Jan 16 03:14:27.482: INFO: Deleting pod "gcepd-client" in namespace "volume-3704"
Jan 16 03:14:27.831: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 16 03:14:35.401: INFO: error deleting PD "bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:14:35.401: INFO: Couldn't delete PD "bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:14:41.506: INFO: error deleting PD "bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:14:41.506: INFO: Couldn't delete PD "bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/disks/bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543' is already being used by 'projects/k8s-jkns-gci-gce-serial-1-2/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wn8', resourceInUseByAnotherResource
Jan 16 03:14:48.713: INFO: Successfully deleted PD "bootstrap-e2e-4d3bb92d-d2ff-4604-bcb5-6abc826e7543".
Jan 16 03:14:48.713: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3704" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":3,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:49.193: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 40 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 99 lines ...
• [SLOW TEST:45.659 seconds]
[k8s.io] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:52.772: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:17.859 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:14:54.366: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:14:54.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 37 lines ...
• [SLOW TEST:13.851 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 76 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:03.712: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:03.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:21.443: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:04.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7585" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:05.279: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 259 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
STEP: Destroying namespace "services-3308" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces","total":-1,"completed":9,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:06.473: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:06.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 41 lines ...
• [SLOW TEST:31.173 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:06.695: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 82 lines ...
• [SLOW TEST:21.197 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":10,"skipped":81,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:08.410: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:08.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
• [SLOW TEST:16.846 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:09.399: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 176 lines ...
• [SLOW TEST:20.571 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:34.090: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-7301
... skipping 40 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Jan 16 03:15:09.296: INFO: Got stdout from 34.82.76.99:22: Hello from prow@bootstrap-e2e-minion-group-dwjn
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jan 16 03:15:10.407: INFO: Got stdout from 34.83.250.193:22: stdout
Jan 16 03:15:10.407: INFO: Got stderr from 34.83.250.193:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:15.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-8921" for this suite.


... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:16.798: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:05.610: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-4456
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191
Jan 16 03:15:06.945: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Jan 16 03:15:07.547: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 16 03:15:12.745: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 16 03:15:18.785: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Not enough topologies in cluster -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:198
------------------------------
... skipping 27 lines ...
• [SLOW TEST:10.087 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 148 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":13,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:24.541: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 155 lines ...
• [SLOW TEST:11.674 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:25.344: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:25.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":13,"skipped":74,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:14.275: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1467
... skipping 76 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined","total":-1,"completed":9,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:27.367: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 80 lines ...
• [SLOW TEST:19.388 seconds]
[k8s.io] [sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:27.825: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:27.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 30 lines ...
STEP: Destroying namespace "services-6730" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:32.055: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 142 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 116 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:13:46.345: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:34.247: INFO: Only supported for providers [aws] (not gce)
... skipping 15 lines ...
      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:24.550: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8743
... skipping 23 lines ...
• [SLOW TEST:10.756 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:35.310: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 149 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for node-Service: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:181
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":-1,"completed":9,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:41.094: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:41.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 158 lines ...
• [SLOW TEST:10.708 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":14,"skipped":74,"failed":0}
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:27.009: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1858
... skipping 12 lines ...
• [SLOW TEST:18.528 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:46
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":15,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:45.539: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:45.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 113 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:46.085: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:46.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 39 lines ...
• [SLOW TEST:12.564 seconds]
[sig-node] RuntimeClass
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:39
  should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:55
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":8,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:46.671: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:46.734: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 182 lines ...
• [SLOW TEST:30.673 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a local redirect http liveness probe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 155 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":9,"skipped":33,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:49.552: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 120 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:49.664: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:49.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 21 lines ...
Jan 16 03:15:46.766: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-disks-341
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:447
STEP: delete a PD
W0116 03:15:49.158188  116340 gce_disks.go:972] GCE persistent disk "non-exist" not found in managed zones (us-west1-b)
Jan 16 03:15:49.158: INFO: Successfully deleted PD "non-exist".
[AfterEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:49.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-341" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Pod Disks should be able to delete a non-existent PD without error","total":-1,"completed":8,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec using resource/name
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:577
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":10,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 137 lines ...
• [SLOW TEST:24.594 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:789
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":7,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:52.427: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:14:18.094: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-616
... skipping 84 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    when invoking the Recycle reclaim policy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:264
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:282
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:52.771: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 24 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.294 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":12,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:57.384: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:57.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192

      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":68,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:44.359: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6871
... skipping 22 lines ...
• [SLOW TEST:13.160 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:57.525: INFO: Only supported for providers [aws] (not gce)
... skipping 15 lines ...
      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
SSS
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":8,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:16.031: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:57.694: INFO: Only supported for providers [azure] (not gce)
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision a volume and schedule a pod with AllowedTopologies
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":9,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:58.661: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 70 lines ...
• [SLOW TEST:9.182 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
Jan 16 03:15:40.117: INFO: Waiting for webhook configuration to be ready...
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
Jan 16 03:15:45.091: INFO: Waiting for webhook configuration to be ready...
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5665" for this suite.
STEP: Destroying namespace "webhook-5665-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:31.233 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:59.179: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:15:59.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:15:27.804: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4200
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:15:59.897: INFO: Driver local doesn't support ntfs -- skipping
... skipping 41 lines ...
Jan 16 03:15:25.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:25.599: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:27.254: INFO: Unable to read jessie_udp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:27.462: INFO: Unable to read jessie_tcp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:27.780: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:27.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:28.927: INFO: Lookups using dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b failed for: [wheezy_udp@dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@dns-test-service.dns-8041.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_udp@dns-test-service.dns-8041.svc.cluster.local jessie_tcp@dns-test-service.dns-8041.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local]

Jan 16 03:15:34.122: INFO: Unable to read wheezy_udp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:34.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:34.559: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:34.813: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:37.591: INFO: Unable to read jessie_udp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:37.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:37.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:38.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:40.305: INFO: Lookups using dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b failed for: [wheezy_udp@dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@dns-test-service.dns-8041.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_udp@dns-test-service.dns-8041.svc.cluster.local jessie_tcp@dns-test-service.dns-8041.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local]

Jan 16 03:15:44.108: INFO: Unable to read wheezy_udp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:44.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:44.591: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:44.814: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:46.945: INFO: Unable to read jessie_udp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:47.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:47.192: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:47.474: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local from pod dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b: the server could not find the requested resource (get pods dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b)
Jan 16 03:15:49.368: INFO: Lookups using dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b failed for: [wheezy_udp@dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@dns-test-service.dns-8041.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_udp@dns-test-service.dns-8041.svc.cluster.local jessie_tcp@dns-test-service.dns-8041.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8041.svc.cluster.local]

Jan 16 03:15:59.105: INFO: DNS probes using dns-8041/dns-test-be15cc77-655c-4c95-ad08-2b2c63d6552b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:47.001 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":9,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:00.369: INFO: Driver local doesn't support ext4 -- skipping
... skipping 97 lines ...
      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Flexvolumes should be mountable when non-attachable","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:12:41.487: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4997
... skipping 123 lines ...
• [SLOW TEST:203.359 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:128
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:04.856: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:04.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 170 lines ...
• [SLOW TEST:14.641 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:07.421: INFO: Only supported for providers [openstack] (not gce)
... skipping 1254 lines ...
• [SLOW TEST:13.507 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":13,"skipped":77,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:10.902: INFO: Only supported for providers [aws] (not gce)
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:12.274: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
• [SLOW TEST:11.784 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:19.257: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:19.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
Jan 16 03:16:06.555: INFO: Waiting for PV local-pvv58xw to bind to PVC pvc-r56kb
Jan 16 03:16:06.555: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-r56kb] to have phase Bound
Jan 16 03:16:06.983: INFO: PersistentVolumeClaim pvc-r56kb found but phase is Pending instead of Bound.
Jan 16 03:16:09.102: INFO: PersistentVolumeClaim pvc-r56kb found and phase=Bound (2.54710988s)
Jan 16 03:16:09.102: INFO: Waiting up to 3m0s for PersistentVolume local-pvv58xw to have phase Bound
Jan 16 03:16:09.405: INFO: PersistentVolume local-pvv58xw found and phase=Bound (302.698238ms)
[It] should fail scheduling due to different NodeSelector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 03:16:09.769: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-daf73d4b-cebc-4121-b223-6caa66629a39] Namespace:persistent-local-volumes-test-9941 PodName:hostexec-bootstrap-e2e-minion-group-1s6w-4wdzj ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 03:16:09.769: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:27.651 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeSelector
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":8,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:20.088: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:20.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:20.592: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:20.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 111 lines ...
• [SLOW TEST:22.629 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:23.005: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 61 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:16:05.173: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-5671
... skipping 23 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    should support forwarding over websockets
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:482
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":11,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:23.506: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 70 lines ...
• [SLOW TEST:16.348 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:23.784: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 322 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:24.547: INFO: Driver gluster doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:24.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver gluster doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:16:09.570: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-4408
... skipping 33 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:31.189: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 72 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should not deadlock when a pod's predecessor fails
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:244
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":46,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 03:16:24.171: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-protection-9134
... skipping 26 lines ...
• [SLOW TEST:8.728 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PV that is not bound to a PVC
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:98
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":12,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:32.900: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:32.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 152 lines ...
STEP: Destroying namespace "services-9189" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":5,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":5,"skipped":15,"failed":0}
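The online-expansion test above grows a volume while the pod stays running: the control plane resizes the backing volume (CSI `ControllerExpandVolume`), then, with `nodeExpansion=on`, the node plugin grows the filesystem in place (`NodeExpandVolume`), with no pod restart. A simplified Python sketch of that two-step flow (dict-based model, not the external-resizer or kubelet code):

```python
# Sketch of CSI online volume expansion: controller resize, then node
# (filesystem) resize, with the pod left running throughout.
def expand_online(volume, new_size, node_expansion=True):
    """Grow `volume` to `new_size`; model of ControllerExpandVolume +
    NodeExpandVolume. `volume` is a plain dict stand-in for a real PV."""
    volume["capacity"] = new_size          # ControllerExpandVolume step
    if node_expansion:
        volume["fs_size"] = new_size       # NodeExpandVolume step, no restart
    return volume

vol = {"capacity": 1, "fs_size": 1}
expand_online(vol, 2)
assert vol == {"capacity": 2, "fs_size": 2}
```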
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 03:16:36.762: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 03:16:36.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 38521 lines ...

ng container hostpath
ephemeral-6929  9s  Warning  Unhealthy  pod/csi-hostpathplugin-0  Liveness probe failed: Get http://10.64.3.243:9898/healthz: dial tcp 10.64.3.243:9898: connect: connection refused
ephemeral-6929  8s  Warning  FailedPreStopHook  pod/csi-hostpathplugin-0  Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_ephemeral-6929(c9fc56ec-b0ed-4bb5-b7c4-5efa57f64605)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
ephemeral-6929  98s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-6929  80s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-6dvct" : failed to sync secret cache: timed out waiting for the condition
ephemeral-6929  8s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-6929  8s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
ephemeral-6929  6s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
ephemeral-6929  4s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-6dvct" : secret "csi-snapshotter-token-6dvct" not found
ephemeral-6929  81s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-6929  81s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-6929  77s  Normal  Pulled  pod/inline-volume-tester-nmr24  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-6929  77s  Normal  Created  pod/inline-volume-tester-nmr24  Created container csi-volume-tester
ephemeral-6929  75s  Normal  Started  pod/inline-volume-tester-nmr24  Started container csi-volume-tester
ephemeral-6929  61s  Normal  Killing  pod/inline-volume-tester-nmr24  Stopping container csi-volume-tester
ephemeral-7260  2s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-7260  2s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-7260  2s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-7260  2s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-9378  5m58s  Normal  Killing  pod/csi-hostpath-attacher-0  Stopping container csi-attacher
ephemeral-9378  7m9s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-9378  7m8s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-9378  5m56s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-9378  5m55s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
ephemeral-9378  5m55s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
ephemeral-9378  5m55s  Warning  FailedMount  pod/csi-hostpath-provisioner-0  MountVolume.SetUp failed for volume "csi-provisioner-token-sg42z" : secret "csi-provisioner-token-sg42z" not found
ephemeral-9378  7m9s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-9378  7m8s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-9378  5m56s  Normal  Pulled  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-9378  5m55s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
ephemeral-9378  5m55s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
ephemeral-9378  5m52s  Warning  FailedMount  pod/csi-hostpath-resizer-0  MountVolume.SetUp failed for volume "csi-resizer-token-qp9q9" : secret "csi-resizer-token-qp9q9" not found
ephemeral-9378  5m53s  Normal  Killing  pod/csi-hostpath-resizer-0  Stopping container csi-resizer
ephemeral-9378  7m9s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-9378  7m9s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-9378  7m4s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-9378  7m4s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
ephemeral-9378  7m1s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
ephemeral-9378  7m1s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-9378  7m  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
ephemeral-9378  6m59s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
ephemeral-9378  6m59s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-9378  6m59s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
ephemeral-9378  6m58s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
ephemeral-9378  5m57s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container node-driver-registrar
ephemeral-9378  5m57s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container hostpath
ephemeral-9378  5m57s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container liveness-probe
ephemeral-9378  5m57s  Warning  FailedPreStopHook  pod/csi-hostpathplugin-0  Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_ephemeral-9378(24e51864-12f6-4c45-b091-c39609b73e8b)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
ephemeral-9378  7m10s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-9378  5m56s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-9378  5m56s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
ephemeral-9378  5m55s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
ephemeral-9378  5m52s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-wbwsw" : secret "csi-snapshotter-token-wbwsw" not found
ephemeral-9378  7m9s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-9378  7m9s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-9378  7m  Warning  FailedMount  pod/inline-volume-tester-qv96s  MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-9378 not found in the list of registered CSI drivers
ephemeral-9378  6m51s  Normal  Pulled  pod/inline-volume-tester-qv96s  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-9378  6m51s  Normal  Created  pod/inline-volume-tester-qv96s  Created container csi-volume-tester
ephemeral-9378  6m51s  Normal  Started  pod/inline-volume-tester-qv96s  Started container csi-volume-tester
ephemeral-9378  6m45s  Normal  Killing  pod/inline-volume-tester-qv96s  Stopping container csi-volume-tester
flexvolume-6593  2m13s  Normal  SuccessfulAttachVolume  pod/flex-client  AttachVolume.Attach succeeded for volume "flex-volume-0"
flexvolume-6593  2m11s  Normal  Pulled  pod/flex-client  Container image "docker.io/library/busybox:1.29" already present on machine
flexvolume-6593  2m11s  Normal  Created  pod/flex-client  Created container flex-client
flexvolume-6593  2m10s  Normal  Started  pod/flex-client  Started container flex-client
flexvolume-6593  2m  Normal  Killing  pod/flex-client  Stopping container flex-client
gc-880  61s  Normal  Scheduled  pod/pod1  Successfully assigned gc-880/pod1 to bootstrap-e2e-minion-group-7htw
gc-880  56s  Normal  Pulled  pod/pod1  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
gc-880  56s  Warning  Failed  pod/pod1  Error: cannot find volume "default-token-gftcd" to mount into container "nginx"
gc-880  61s  Normal  Scheduled  pod/pod2  Successfully assigned gc-880/pod2 to bootstrap-e2e-minion-group-7htw
gc-880  56s  Normal  Pulled  pod/pod2  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
gc-880  56s  Warning  Failed  pod/pod2  Error: cannot find volume "default-token-gftcd" to mount into container "nginx"
gc-880  60s  Normal  Scheduled  pod/pod3  Successfully assigned gc-880/pod3 to bootstrap-e2e-minion-group-7htw
gc-880  56s  Normal  Pulled  pod/pod3  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
gc-880  56s  Warning  Failed  pod/pod3  Error: cannot find volume "default-token-gftcd" to mount into container "nginx"
hostpath-7612  32s  Normal  Scheduled  pod/pod-host-path-test  Successfully assigned hostpath-7612/pod-host-path-test to bootstrap-e2e-minion-group-7htw
hostpath-7612  28s  Normal  Pulled  pod/pod-host-path-test  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
hostpath-7612  28s  Normal  Created  pod/pod-host-path-test  Created container test-container-1
hostpath-7612  26s  Normal  Started  pod/pod-host-path-test  Started container test-container-1
hostpath-7612  26s  Normal  Pulled  pod/pod-host-path-test  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
hostpath-7612  26s  Normal  Created  pod/pod-host-path-test  Created container test-container-2
hostpath-7612  23s  Normal  Started  pod/pod-host-path-test  Started container test-container-2
init-container-7299  35s  Normal  Scheduled  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Successfully assigned init-container-7299/pod-init-29df2517-cbae-4a17-b532-68a744169c40 to bootstrap-e2e-minion-group-7htw
init-container-7299  31s  Normal  Pulled  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Container image "docker.io/library/busybox:1.29" already present on machine
init-container-7299  30s  Normal  Created  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Created container init1
init-container-7299  29s  Normal  Started  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Started container init1
init-container-7299  25s  Normal  Pulled  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Container image "docker.io/library/busybox:1.29" already present on machine
init-container-7299  25s  Normal  Created  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Created container init2
init-container-7299  23s  Normal  Started  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Started container init2
init-container-7299  21s  Normal  Pulled  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Container image "k8s.gcr.io/pause:3.1" already present on machine
init-container-7299  21s  Normal  Created  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Created container run1
init-container-7299  19s  Normal  Started  pod/pod-init-29df2517-cbae-4a17-b532-68a744169c40  Started container run1
job-1377  3m3s  Normal  Scheduled  pod/fail-once-local-khbgk  Successfully assigned job-1377/fail-once-local-khbgk to bootstrap-e2e-minion-group-7htw
job-1377  2m50s  Normal  Pulled  pod/fail-once-local-khbgk  Container image "docker.io/library/busybox:1.29" already present on machine
job-1377  2m50s  Normal  Created  pod/fail-once-local-khbgk  Created container c
job-1377  2m47s  Normal  Started  pod/fail-once-local-khbgk  Started container c
job-1377  2m41s  Normal  Scheduled  pod/fail-once-local-rkp48  Successfully assigned job-1377/fail-once-local-rkp48 to bootstrap-e2e-minion-group-5wn8
job-1377  2m31s  Normal  Pulled  pod/fail-once-local-rkp48  Container image "docker.io/library/busybox:1.29" already present on machine
job-1377  2m31s  Normal  Created  pod/fail-once-local-rkp48  Created container c
job-1377  2m28s  Normal  Started  pod/fail-once-local-rkp48  Started container c
job-1377  3m3s  Normal  Scheduled  pod/fail-once-local-spczg  Successfully assigned job-1377/fail-once-local-spczg to bootstrap-e2e-minion-group-7htw
job-1377  2m53s  Normal  Pulled  pod/fail-once-local-spczg  Container image "docker.io/library/busybox:1.29" already present on machine
job-1377  2m53s  Normal  Created  pod/fail-once-local-spczg  Created container c
job-1377  2m49s  Normal  Started  pod/fail-once-local-spczg  Started container c
job-1377  2m46s  Normal  Scheduled  pod/fail-once-local-zlvhl  Successfully assigned job-1377/fail-once-local-zlvhl to bootstrap-e2e-minion-group-5wn8
job-1377  2m37s  Normal  Pulled  pod/fail-once-local-zlvhl  Container image "docker.io/library/busybox:1.29" already present on machine
job-1377  2m36s  Normal  Created  pod/fail-once-local-zlvhl  Created container c
job-1377  2m34s  Normal  Started  pod/fail-once-local-zlvhl  Started container c
job-1377  3m3s  Normal  SuccessfulCreate  job/fail-once-local  Created pod: fail-once-local-spczg
job-1377  3m3s  Normal  SuccessfulCreate  job/fail-once-local  Created pod: fail-once-local-khbgk
job-1377  2m46s  Normal  SuccessfulCreate  job/fail-once-local  Created pod: fail-once-local-zlvhl
job-1377  2m42s  Normal  SuccessfulCreate  job/fail-once-local  Created pod: fail-once-local-rkp48
job-1377  2m24s  Normal  Completed  job/fail-once-local  Job completed
job-6727  5m46s  Normal  Scheduled  pod/exceed-active-deadline-429mt  Successfully assigned job-6727/exceed-active-deadline-429mt to bootstrap-e2e-minion-group-5wn8
job-6727  5m46s  Normal  Scheduled  pod/exceed-active-deadline-fs4mg  Successfully assigned job-6727/exceed-active-deadline-fs4mg to bootstrap-e2e-minion-group-7htw
job-6727  5m45s  Warning  FailedMount  pod/exceed-active-deadline-fs4mg  MountVolume.SetUp failed for volume "default-token-c5w5x" : failed to sync secret cache: timed out waiting for the condition
job-6727  5m46s  Normal  SuccessfulCreate  job/exceed-active-deadline  Created pod: exceed-active-deadline-429mt
job-6727  5m46s  Normal  SuccessfulCreate  job/exceed-active-deadline  Created pod: exceed-active-deadline-fs4mg
job-6727  5m45s  Normal  SuccessfulDelete  job/exceed-active-deadline  Deleted pod: exceed-active-deadline-429mt
job-6727  5m45s  Normal  SuccessfulDelete  job/exceed-active-deadline  Deleted pod: exceed-active-deadline-fs4mg
job-6727  5m45s  Warning  DeadlineExceeded  job/exceed-active-deadline  Job was active longer than specified deadline
job-6870  3m21s  Normal  Scheduled  pod/foo-44s4k  Successfully assigned job-6870/foo-44s4k to bootstrap-e2e-minion-group-dwjn
job-6870  3m18s  Normal  Pulled  pod/foo-44s4k  Container image "docker.io/library/busybox:1.29" already present on machine
job-6870  3m18s  Normal  Created  pod/foo-44s4k  Created container c
job-6870  3m18s  Normal  Started  pod/foo-44s4k  Started container c
job-6870  3m5s  Normal  Killing  pod/foo-44s4k  Stopping container c
job-6870  3m21s  Normal  Scheduled  pod/foo-j7djv  Successfully assigned job-6870/foo-j7djv to bootstrap-e2e-minion-group-7htw
job-6870  3m17s  Normal  Pulled  pod/foo-j7djv  Container image "docker.io/library/busybox:1.29" already present on machine
job-6870  3m16s  Normal  Created  pod/foo-j7djv  Created container c
job-6870  3m14s  Normal  Started  pod/foo-j7djv  Started container c
job-6870  3m5s  Normal  Killing  pod/foo-j7djv  Stopping container c
job-6870  3m21s  Normal  SuccessfulCreate  job/foo  Created pod: foo-44s4k
job-6870  3m21s  Normal  SuccessfulCreate  job/foo  Created pod: foo-j7djv
kube-system  19m  Warning  FailedScheduling  pod/coredns-65567c7b57-jjjjz  no nodes available to schedule pods
kube-system  19m  Warning  FailedScheduling  pod/coredns-65567c7b57-jjjjz  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  19m  Warning  FailedScheduling  pod/coredns-65567c7b57-jjjjz  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  19m  Normal  Scheduled  pod/coredns-65567c7b57-jjjjz  Successfully assigned kube-system/coredns-65567c7b57-jjjjz to bootstrap-e2e-minion-group-dwjn
kube-system  19m  Normal  Pulling  pod/coredns-65567c7b57-jjjjz  Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  19m  Normal  Pulled  pod/coredns-65567c7b57-jjjjz  Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  19m  Normal  Created  pod/coredns-65567c7b57-jjjjz  Created container coredns
kube-system  19m  Normal  Started  pod/coredns-65567c7b57-jjjjz  Started container coredns
kube-system  19m  Normal  Scheduled  pod/coredns-65567c7b57-qsm6h  Successfully assigned kube-system/coredns-65567c7b57-qsm6h to bootstrap-e2e-minion-group-7htw
kube-system  19m  Normal  Pulling  pod/coredns-65567c7b57-qsm6h  Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  19m  Normal  Pulled  pod/coredns-65567c7b57-qsm6h  Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  19m  Normal  Created  pod/coredns-65567c7b57-qsm6h  Created container coredns
kube-system  19m  Normal  Started  pod/coredns-65567c7b57-qsm6h  Started container coredns
kube-system  19m  Warning  FailedCreate  replicaset/coredns-65567c7b57  Error creating: pods "coredns-65567c7b57-" is forbidden: no providers available to validate pod request
kube-system  19m  Warning  FailedCreate  replicaset/coredns-65567c7b57  Error creating: pods "coredns-65567c7b57-" is forbidden: unable to validate against any pod security policy: []
kube-system  19m  Normal  SuccessfulCreate  replicaset/coredns-65567c7b57  Created pod: coredns-65567c7b57-jjjjz
kube-system  19m  Normal  SuccessfulCreate  replicaset/coredns-65567c7b57  Created pod: coredns-65567c7b57-qsm6h
kube-system  19m  Normal  ScalingReplicaSet  deployment/coredns  Scaled up replica set coredns-65567c7b57 to 1
kube-system  19m  Normal  ScalingReplicaSet  deployment/coredns  Scaled up replica set coredns-65567c7b57 to 2
kube-system  19m  Warning  FailedScheduling  pod/event-exporter-v0.3.1-747b47fcd-t9mr9  no nodes available to schedule pods
kube-system  19m  Warning  FailedScheduling  pod/event-exporter-v0.3.1-747b47fcd-t9mr9  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  19m  Warning  FailedScheduling
pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-t9mr9 to bootstrap-e2e-minion-group-5wn8\nkube-system                          19m         Normal    Pulling                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Pulling image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          19m         Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Successfully pulled image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          19m         Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Created container event-exporter\nkube-system                          19m         Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Started container event-exporter\nkube-system                          19m         Normal    Pulling                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          19m         Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Successfully pulled image 
\"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          19m         Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-t9mr9                                   Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/event-exporter-v0.3.1-747b47fcd                                  Created pod: event-exporter-v0.3.1-747b47fcd-t9mr9\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/event-exporter-v0.3.1                                            Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1\nkube-system                          19m         Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-qhpx4 to bootstrap-e2e-minion-group-7htw\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4             
                        Pulling image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     Successfully pulled image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     Created container fluentd-gcp-scaler\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-scaler-76d9c77b4d-qhpx4                                     Started container fluentd-gcp-scaler\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/fluentd-gcp-scaler-76d9c77b4d                                    Created pod: fluentd-gcp-scaler-76d9c77b4d-qhpx4\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/fluentd-gcp-scaler                                               Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1\nkube-system                          18m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-2x9kf                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-2x9kf to bootstrap-e2e-minion-group-7htw\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-2x9kf                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-2x9kf                                                Created container fluentd-gcp\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-2x9kf         
                                       Started container fluentd-gcp\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-2x9kf                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-2x9kf                                                Created container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-2x9kf                                                Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-5zqm7                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-5zqm7 to bootstrap-e2e-master\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-5zqm7                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Created container fluentd-gcp\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Started container fluentd-gcp\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-5zqm7               
                                 Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    SandboxChanged               pod/fluentd-gcp-v3.2.0-5zqm7                                                Pod sandbox changed, it will be killed and re-created.\nkube-system                          19m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Stopping container fluentd-gcp\nkube-system                          19m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-5zqm7                                                Stopping container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-5zqm7                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          18m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-cgh89                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-cgh89 to bootstrap-e2e-minion-group-1s6w\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-cgh89                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          
18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-cgh89                                                Created container fluentd-gcp\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-cgh89                                                Started container fluentd-gcp\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-cgh89                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-cgh89                                                Created container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-cgh89                                                Started container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-dm9dj                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-dm9dj to bootstrap-e2e-minion-group-dwjn\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-dm9dj                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-dm9dj                                                Created container fluentd-gcp\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-dm9dj                                                Started container fluentd-gcp\nkube-system                          18m         Normal    
Pulled                       pod/fluentd-gcp-v3.2.0-dm9dj                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-dm9dj                                                Created container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-dm9dj                                                Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-g4bxb                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-g4bxb to bootstrap-e2e-minion-group-1s6w\nkube-system                          19m         Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-g4bxb                                                MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          19m         Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-g4bxb                                                MountVolume.SetUp failed for volume \"fluentd-gcp-token-bnw87\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-g4bxb                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         
Normal    Created                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Created container fluentd-gcp\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Started container fluentd-gcp\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-g4bxb                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Started container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Stopping container fluentd-gcp\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-g4bxb                                                Stopping container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-h9wjw                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-h9wjw to bootstrap-e2e-master\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-h9wjw                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          18m         Normal    Created   
                   pod/fluentd-gcp-v3.2.0-h9wjw                                                Created container fluentd-gcp\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-h9wjw                                                Started container fluentd-gcp\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-h9wjw                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-h9wjw                                                Created container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-h9wjw                                                Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-lfwjp                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-lfwjp to bootstrap-e2e-minion-group-dwjn\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-lfwjp                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Created container fluentd-gcp\nkube-system                          19m         Normal    
Started                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Started container fluentd-gcp\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-lfwjp                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Started container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Stopping container fluentd-gcp\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-lfwjp                                                Stopping container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-nbn5t                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-nbn5t to bootstrap-e2e-minion-group-5wn8\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-nbn5t                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-nbn5t                                                Created container fluentd-gcp\nkube-system                          18m         Normal    Started  
                    pod/fluentd-gcp-v3.2.0-nbn5t                                                Started container fluentd-gcp\nkube-system                          18m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-nbn5t                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          18m         Normal    Created                      pod/fluentd-gcp-v3.2.0-nbn5t                                                Created container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Started                      pod/fluentd-gcp-v3.2.0-nbn5t                                                Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-ptwjj                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-ptwjj to bootstrap-e2e-minion-group-7htw\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-ptwjj                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Created container fluentd-gcp\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Started container fluentd-gcp\nkube-system                          19m         Normal    
Pulled                       pod/fluentd-gcp-v3.2.0-ptwjj                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Started container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Stopping container fluentd-gcp\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-ptwjj                                                Stopping container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-z4ms9                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-z4ms9 to bootstrap-e2e-minion-group-5wn8\nkube-system                          19m         Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-z4ms9                                                MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          19m         Normal    Pulling                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-z4ms9                                                Successfully pulled image 
\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Created container fluentd-gcp\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Started container fluentd-gcp\nkube-system                          19m         Normal    Pulled                       pod/fluentd-gcp-v3.2.0-z4ms9                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          19m         Normal    Created                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Started container prometheus-to-sd-exporter\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Stopping container fluentd-gcp\nkube-system                          18m         Normal    Killing                      pod/fluentd-gcp-v3.2.0-z4ms9                                                Stopping container prometheus-to-sd-exporter\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-5zqm7\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-ptwjj\nkube-system                          19m         Normal    
SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-lfwjp\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-g4bxb\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-z4ms9\nkube-system                          19m         Normal    SuccessfulDelete             daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-5zqm7\nkube-system                          18m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-h9wjw\nkube-system                          18m         Normal    SuccessfulDelete             daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-ptwjj\nkube-system                          18m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-2x9kf\nkube-system                          18m         Normal    SuccessfulDelete             daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-g4bxb\nkube-system                          18m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-cgh89\nkube-system                          18m         Normal    SuccessfulDelete             daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-lfwjp\nkube-system                  
        18m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-dm9dj\nkube-system                          18m         Normal    SuccessfulDelete             daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-z4ms9\nkube-system                          18m         Normal    SuccessfulCreate             daemonset/fluentd-gcp-v3.2.0                                                (combined from similar events): Created pod: fluentd-gcp-v3.2.0-nbn5t\nkube-system                          19m         Normal    LeaderElection               configmap/ingress-gce-lock                                                  bootstrap-e2e-master_ddbd2 became leader\nkube-system                          20m         Normal    LeaderElection               endpoints/kube-controller-manager                                           bootstrap-e2e-master_70a05227-8db0-4dd4-9a4d-093f4dc9430b became leader\nkube-system                          20m         Normal    LeaderElection               lease/kube-controller-manager                                               bootstrap-e2e-master_70a05227-8db0-4dd4-9a4d-093f4dc9430b became leader\nkube-system                          19m         Warning   FailedScheduling             pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system     
                     19m         Normal    Scheduled                    pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-vfc5h to bootstrap-e2e-minion-group-1s6w\nkube-system                          19m         Normal    Pulling                      pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    Pulling image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          19m         Normal    Pulled                       pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    Successfully pulled image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          19m         Normal    Created                      pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    Created container autoscaler\nkube-system                          19m         Normal    Started                      pod/kube-dns-autoscaler-65bc6d4889-vfc5h                                    Started container autoscaler\nkube-system                          19m         Warning   FailedCreate                 replicaset/kube-dns-autoscaler-65bc6d4889                                   Error creating: pods \"kube-dns-autoscaler-65bc6d4889-\" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount \"kube-dns-autoscaler\" not found\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/kube-dns-autoscaler-65bc6d4889                                   Created pod: kube-dns-autoscaler-65bc6d4889-vfc5h\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/kube-dns-autoscaler                                              Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1\nkube-system                          19m         Normal    Pulled               
        pod/kube-proxy-bootstrap-e2e-minion-group-1s6w                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.789_5d1c3016103d83\" already present on machine\nkube-system                          19m         Normal    Created                      pod/kube-proxy-bootstrap-e2e-minion-group-1s6w                              Created container kube-proxy\nkube-system                          19m         Normal    Started                      pod/kube-proxy-bootstrap-e2e-minion-group-1s6w                              Started container kube-proxy\nkube-system                          19m         Normal    Pulled                       pod/kube-proxy-bootstrap-e2e-minion-group-5wn8                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.789_5d1c3016103d83\" already present on machine\nkube-system                          19m         Normal    Created                      pod/kube-proxy-bootstrap-e2e-minion-group-5wn8                              Created container kube-proxy\nkube-system                          19m         Normal    Started                      pod/kube-proxy-bootstrap-e2e-minion-group-5wn8                              Started container kube-proxy\nkube-system                          19m         Normal    Pulled                       pod/kube-proxy-bootstrap-e2e-minion-group-7htw                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.789_5d1c3016103d83\" already present on machine\nkube-system                          19m         Normal    Created                      pod/kube-proxy-bootstrap-e2e-minion-group-7htw                              Created container kube-proxy\nkube-system                          19m         Normal    Started                      pod/kube-proxy-bootstrap-e2e-minion-group-7htw                              Started container kube-proxy\nkube-system                          19m         Normal    Pulled                       
pod/kube-proxy-bootstrap-e2e-minion-group-dwjn                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.789_5d1c3016103d83\" already present on machine\nkube-system                          19m         Normal    Created                      pod/kube-proxy-bootstrap-e2e-minion-group-dwjn                              Created container kube-proxy\nkube-system                          19m         Normal    Started                      pod/kube-proxy-bootstrap-e2e-minion-group-dwjn                              Started container kube-proxy\nkube-system                          20m         Normal    LeaderElection               endpoints/kube-scheduler                                                    bootstrap-e2e-master_4bb84b0f-3f03-462d-b09a-6315a38ed97a became leader\nkube-system                          20m         Normal    LeaderElection               lease/kube-scheduler                                                        bootstrap-e2e-master_4bb84b0f-3f03-462d-b09a-6315a38ed97a became leader\nkube-system                          19m         Warning   FailedScheduling             pod/kubernetes-dashboard-7778f8b456-b8kqv                                   no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/kubernetes-dashboard-7778f8b456-b8kqv                                   0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/kubernetes-dashboard-7778f8b456-b8kqv                                   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/kubernetes-dashboard-7778f8b456-b8kqv                                   Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-b8kqv to 
bootstrap-e2e-minion-group-dwjn\nkube-system                          19m         Normal    Pulling                      pod/kubernetes-dashboard-7778f8b456-b8kqv                                   Pulling image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          19m         Normal    Pulled                       pod/kubernetes-dashboard-7778f8b456-b8kqv                                   Successfully pulled image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          19m         Normal    Created                      pod/kubernetes-dashboard-7778f8b456-b8kqv                                   Created container kubernetes-dashboard\nkube-system                          19m         Normal    Started                      pod/kubernetes-dashboard-7778f8b456-b8kqv                                   Started container kubernetes-dashboard\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/kubernetes-dashboard-7778f8b456                                  Created pod: kubernetes-dashboard-7778f8b456-b8kqv\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/kubernetes-dashboard                                             Scaled up replica set kubernetes-dashboard-7778f8b456 to 1\nkube-system                          19m         Warning   FailedScheduling             pod/l7-default-backend-678889f899-rrxvq                                     no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/l7-default-backend-678889f899-rrxvq                                     0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/l7-default-backend-678889f899-rrxvq                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 
node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/l7-default-backend-678889f899-rrxvq                                     Successfully assigned kube-system/l7-default-backend-678889f899-rrxvq to bootstrap-e2e-minion-group-dwjn\nkube-system                          19m         Normal    Pulling                      pod/l7-default-backend-678889f899-rrxvq                                     Pulling image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                          19m         Normal    Pulled                       pod/l7-default-backend-678889f899-rrxvq                                     Successfully pulled image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                          19m         Normal    Created                      pod/l7-default-backend-678889f899-rrxvq                                     Created container default-http-backend\nkube-system                          19m         Normal    Started                      pod/l7-default-backend-678889f899-rrxvq                                     Started container default-http-backend\nkube-system                          19m         Warning   FailedCreate                 replicaset/l7-default-backend-678889f899                                    Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: no providers available to validate pod request\nkube-system                          19m         Warning   FailedCreate                 replicaset/l7-default-backend-678889f899                                    Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/l7-default-backend-678889f899                                    Created pod: 
l7-default-backend-678889f899-rrxvq\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/l7-default-backend                                               Scaled up replica set l7-default-backend-678889f899 to 1\nkube-system                          19m         Normal    Created                      pod/l7-lb-controller-bootstrap-e2e-master                                   Created container l7-lb-controller\nkube-system                          19m         Normal    Started                      pod/l7-lb-controller-bootstrap-e2e-master                                   Started container l7-lb-controller\nkube-system                          19m         Normal    Pulled                       pod/l7-lb-controller-bootstrap-e2e-master                                   Container image \"k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1\" already present on machine\nkube-system                          19m         Normal    Scheduled                    pod/metadata-proxy-v0.1-886px                                               Successfully assigned kube-system/metadata-proxy-v0.1-886px to bootstrap-e2e-minion-group-dwjn\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-886px                                               Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-886px                                               Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-886px                                               Created container metadata-proxy\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-886px                                               Started container 
metadata-proxy\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-886px                                               Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-886px                                               Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-886px                                               Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-886px                                               Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/metadata-proxy-v0.1-bnsmz                                               Successfully assigned kube-system/metadata-proxy-v0.1-bnsmz to bootstrap-e2e-minion-group-5wn8\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-bnsmz                                               Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-bnsmz                                               Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-bnsmz                                               Created container metadata-proxy\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-bnsmz                                               Started container metadata-proxy\nkube-system     
                     19m         Normal    Pulling                      pod/metadata-proxy-v0.1-bnsmz                                               Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-bnsmz                                               Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-bnsmz                                               Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-bnsmz                                               Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/metadata-proxy-v0.1-fg59k                                               Successfully assigned kube-system/metadata-proxy-v0.1-fg59k to bootstrap-e2e-minion-group-1s6w\nkube-system                          19m         Warning   FailedMount                  pod/metadata-proxy-v0.1-fg59k                                               MountVolume.SetUp failed for volume \"metadata-proxy-token-nxz69\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-fg59k                                               Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-fg59k                                               Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-fg59k                            
                   Created container metadata-proxy\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-fg59k                                               Started container metadata-proxy\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-fg59k                                               Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-fg59k                                               Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-fg59k                                               Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-fg59k                                               Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/metadata-proxy-v0.1-n6sbs                                               Successfully assigned kube-system/metadata-proxy-v0.1-n6sbs to bootstrap-e2e-minion-group-7htw\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-n6sbs                                               Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-n6sbs                                               Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-n6sbs                                               Created 
container metadata-proxy\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-n6sbs                                               Started container metadata-proxy\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-n6sbs                                               Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-n6sbs                                               Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-n6sbs                                               Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-n6sbs                                               Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Scheduled                    pod/metadata-proxy-v0.1-zw7r2                                               Successfully assigned kube-system/metadata-proxy-v0.1-zw7r2 to bootstrap-e2e-master\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-zw7r2                                               Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-zw7r2                                               Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-zw7r2                                               Created container metadata-proxy\nkube-system      
                    19m         Normal    Started                      pod/metadata-proxy-v0.1-zw7r2                                               Started container metadata-proxy\nkube-system                          19m         Normal    Pulling                      pod/metadata-proxy-v0.1-zw7r2                                               Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Pulled                       pod/metadata-proxy-v0.1-zw7r2                                               Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          19m         Normal    Created                      pod/metadata-proxy-v0.1-zw7r2                                               Created container prometheus-to-sd-exporter\nkube-system                          19m         Normal    Started                      pod/metadata-proxy-v0.1-zw7r2                                               Started container prometheus-to-sd-exporter\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                               Created pod: metadata-proxy-v0.1-zw7r2\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                               Created pod: metadata-proxy-v0.1-n6sbs\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                               Created pod: metadata-proxy-v0.1-bnsmz\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                               Created pod: metadata-proxy-v0.1-886px\nkube-system                          19m         Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                    
                           Created pod: metadata-proxy-v0.1-fg59k\nkube-system                          19m         Normal    Scheduled                    pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-sm7ct to bootstrap-e2e-minion-group-dwjn\nkube-system                          19m         Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          19m         Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          19m         Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Created container metrics-server\nkube-system                          19m         Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Started container metrics-server\nkube-system                          19m         Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          18m         Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          18m         Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  Created container metrics-server-nanny\nkube-system                          18m         Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-sm7ct                                  
Started container metrics-server-nanny\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-5f859c87d6                                 Created pod: metrics-server-v0.3.6-5f859c87d6-sm7ct\nkube-system                          19m         Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-wqkg8 to bootstrap-e2e-minion-group-1s6w\nkube-system                          19m         Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          19m         Normal    Pulled                       pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          19m         Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Created container metrics-server\nkube-system                          19m         Normal    Started                      
pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Started container metrics-server\nkube-system                          19m         Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          19m         Normal    Pulled                       pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          19m         Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Created container metrics-server-nanny\nkube-system                          19m         Normal    Started                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Started container metrics-server-nanny\nkube-system                          18m         Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Stopping container metrics-server\nkube-system                          18m         Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-wqkg8                                   Stopping container metrics-server-nanny\nkube-system                          19m         Warning   FailedCreate                 replicaset/metrics-server-v0.3.6-65d4dc878                                  Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          19m         Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-65d4dc878                                  Created pod: metrics-server-v0.3.6-65d4dc878-wqkg8\nkube-system                          18m         Normal    SuccessfulDelete             
replicaset/metrics-server-v0.3.6-65d4dc878                                  Deleted pod: metrics-server-v0.3.6-65d4dc878-wqkg8\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                            Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                          19m         Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                            Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                          18m         Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                            Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                          19m         Warning   FailedScheduling             pod/volume-snapshot-controller-0                                            no nodes available to schedule pods\nkube-system                          19m         Warning   FailedScheduling             pod/volume-snapshot-controller-0                                            0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          19m         Warning   FailedScheduling             pod/volume-snapshot-controller-0                                            0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          19m         Normal    Scheduled                    pod/volume-snapshot-controller-0                                            Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-1s6w\nkube-system                          19m         Normal    Pulling                      pod/volume-snapshot-controller-0                                            Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system  
                        19m         Normal    Pulled                       pod/volume-snapshot-controller-0                                            Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          19m         Normal    Created                      pod/volume-snapshot-controller-0                                            Created container volume-snapshot-controller\nkube-system                          19m         Normal    Started                      pod/volume-snapshot-controller-0                                            Started container volume-snapshot-controller\nkube-system                          19m         Normal    SuccessfulCreate             statefulset/volume-snapshot-controller                                      create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-1935                         106s        Normal    Scheduled                    pod/pause                                                                   Successfully assigned kubectl-1935/pause to bootstrap-e2e-minion-group-1s6w\nkubectl-1935                         104s        Normal    Pulled                       pod/pause                                                                   Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubectl-1935                         104s        Normal    Created                      pod/pause                                                                   Created container pause\nkubectl-1935                         103s        Normal    Started                      pod/pause                                                                   Started container pause\nkubectl-1935                         98s         Normal    Killing                      pod/pause                                                                   Stopping container pause\nkubectl-2492                         2m55s       
Normal    Scheduled   pod/agnhost-master-h5f4d   Successfully assigned kubectl-2492/agnhost-master-h5f4d to bootstrap-e2e-minion-group-5wn8
kubectl-2492   2m56s   Normal   SuccessfulCreate   replicationcontroller/agnhost-master   Created pod: agnhost-master-h5f4d
kubectl-2492   2m55s   Normal   SuccessfulCreate   replicationcontroller/agnhost-master   Created pod: agnhost-master-ph6xx
kubectl-2570   5m41s   Normal   Scheduled   pod/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2   Successfully assigned kubectl-2570/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2 to bootstrap-e2e-minion-group-1s6w
kubectl-2570   5m38s   Normal   Pulled   pod/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-2570   5m37s   Normal   Created   pod/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2   Created container e2e-test-httpd-rc
kubectl-2570   5m35s   Normal   Started   pod/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2   Started container e2e-test-httpd-rc
kubectl-2570   5m12s   Normal   Killing   pod/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2   Stopping container e2e-test-httpd-rc
kubectl-2570   5m42s   Normal   SuccessfulCreate   replicationcontroller/e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944   Created pod: e2e-test-httpd-rc-16fec10f5b8cf02faaeb846102335944-fzzx2
kubectl-2570   5m45s   Normal   Scheduled   pod/e2e-test-httpd-rc-flksr   Successfully assigned kubectl-2570/e2e-test-httpd-rc-flksr to bootstrap-e2e-minion-group-dwjn
kubectl-2570   5m41s   Normal   Pulled   pod/e2e-test-httpd-rc-flksr   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-2570   5m41s   Normal   Created   pod/e2e-test-httpd-rc-flksr   Created container e2e-test-httpd-rc
kubectl-2570   5m40s   Normal   Started   pod/e2e-test-httpd-rc-flksr   Started container e2e-test-httpd-rc
kubectl-2570   5m31s   Normal   Killing   pod/e2e-test-httpd-rc-flksr   Stopping container e2e-test-httpd-rc
kubectl-2570   5m45s   Normal   SuccessfulCreate   replicationcontroller/e2e-test-httpd-rc   Created pod: e2e-test-httpd-rc-flksr
kubectl-2570   5m31s   Normal   SuccessfulDelete   replicationcontroller/e2e-test-httpd-rc   Deleted pod: e2e-test-httpd-rc-flksr
kubectl-290   5m31s   Normal   Scheduled   pod/httpd   Successfully assigned kubectl-290/httpd to bootstrap-e2e-minion-group-dwjn
kubectl-290   5m28s   Normal   Pulled   pod/httpd   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-290   5m28s   Normal   Created   pod/httpd   Created container httpd
kubectl-290   5m28s   Normal   Started   pod/httpd   Started container httpd
kubectl-290   4m16s   Normal   Killing   pod/httpd   Stopping container httpd
kubectl-3942   3s   Normal   Scheduled   pod/failure-1   Successfully assigned kubectl-3942/failure-1 to bootstrap-e2e-minion-group-7htw
kubectl-3942   43s   Normal   Scheduled   pod/httpd   Successfully assigned kubectl-3942/httpd to bootstrap-e2e-minion-group-7htw
kubectl-3942   38s   Normal   Pulled   pod/httpd   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-3942   38s   Normal   Created   pod/httpd   Created container httpd
kubectl-3942   35s   Normal   Started   pod/httpd   Started container httpd
kubectl-3942   12s   Normal   Scheduled   pod/success   Successfully assigned kubectl-3942/success to bootstrap-e2e-minion-group-dwjn
kubectl-3942   10s   Normal   Pulled   pod/success   Container image "docker.io/library/busybox:1.29" already present on machine
kubectl-3942   10s   Normal   Created   pod/success   Created container success
kubectl-3942   9s   Normal   Started   pod/success   Started container success
kubectl-490   3m19s   Normal   Scheduled   pod/e2e-test-httpd-deployment-594dddd44f-zk5gt   Successfully assigned kubectl-490/e2e-test-httpd-deployment-594dddd44f-zk5gt to bootstrap-e2e-minion-group-1s6w
kubectl-490   3m19s   Normal   SuccessfulCreate   replicaset/e2e-test-httpd-deployment-594dddd44f   Created pod: e2e-test-httpd-deployment-594dddd44f-zk5gt
kubectl-490   3m19s   Normal   ScalingReplicaSet   deployment/e2e-test-httpd-deployment   Scaled up replica set e2e-test-httpd-deployment-594dddd44f to 1
kubectl-5463   5m48s   Normal   Scheduled   pod/httpd   Successfully assigned kubectl-5463/httpd to bootstrap-e2e-minion-group-5wn8
kubectl-5463   5m45s   Normal   Pulled   pod/httpd   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-5463   5m44s   Normal   Created   pod/httpd   Created container httpd
kubectl-5463   5m42s   Normal   Started   pod/httpd   Started container httpd
kubectl-5463   5m10s   Normal   Killing   pod/httpd   Stopping container httpd
kubectl-668   2m53s   Normal   Scheduled   pod/e2e-test-httpd-deployment-594dddd44f-5b48t   Successfully assigned kubectl-668/e2e-test-httpd-deployment-594dddd44f-5b48t to bootstrap-e2e-minion-group-dwjn
kubectl-668   2m51s   Normal   Pulled   pod/e2e-test-httpd-deployment-594dddd44f-5b48t   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-668   2m51s   Normal   Created   pod/e2e-test-httpd-deployment-594dddd44f-5b48t   Created container e2e-test-httpd-deployment
kubectl-668   2m50s   Normal   Started   pod/e2e-test-httpd-deployment-594dddd44f-5b48t   Started container e2e-test-httpd-deployment
kubectl-668   2m53s   Normal   SuccessfulCreate   replicaset/e2e-test-httpd-deployment-594dddd44f   Created pod: e2e-test-httpd-deployment-594dddd44f-5b48t
kubectl-668   2m53s   Normal   ScalingReplicaSet   deployment/e2e-test-httpd-deployment   Scaled up replica set e2e-test-httpd-deployment-594dddd44f to 1
kubectl-9342   <unknown>   some data here
kubectl-9342   5s   Warning   FailedScheduling   pod/pod1pmzdzdx7jj   0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.
kubectl-9342   4s   Warning   FailedScheduling   pod/pod1pmzdzdx7jj   skip schedule deleting pod: kubectl-9342/pod1pmzdzdx7jj
kubectl-9342   6s   Normal   Scheduled   pod/rc1pmzdzdx7jj-lhcwg   Successfully assigned kubectl-9342/rc1pmzdzdx7jj-lhcwg to bootstrap-e2e-minion-group-dwjn
kubectl-9342   4s   Normal   Pulling   pod/rc1pmzdzdx7jj-lhcwg   Pulling image "fedora:latest"
kubectl-9342   6s   Normal   SuccessfulCreate   replicationcontroller/rc1pmzdzdx7jj   Created pod: rc1pmzdzdx7jj-lhcwg
kubectl-9814   5m42s   Normal   Scheduled   pod/e2e-test-rm-busybox-job-k5bns   Successfully assigned kubectl-9814/e2e-test-rm-busybox-job-k5bns to bootstrap-e2e-minion-group-7htw
kubectl-9814   5m31s   Normal   Pulled   pod/e2e-test-rm-busybox-job-k5bns   Container image "docker.io/library/busybox:1.29" already present on machine
kubectl-9814   5m31s   Normal   Created   pod/e2e-test-rm-busybox-job-k5bns   Created container e2e-test-rm-busybox-job
kubectl-9814   5m24s   Normal   Started   pod/e2e-test-rm-busybox-job-k5bns   Started container e2e-test-rm-busybox-job
kubectl-9814   5m43s   Normal   SuccessfulCreate   job/e2e-test-rm-busybox-job   Created pod: e2e-test-rm-busybox-job-k5bns
kubelet-test-5003   3m17s   Normal   Scheduled   pod/busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c   Successfully assigned kubelet-test-5003/busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c to bootstrap-e2e-minion-group-dwjn
kubelet-test-5003   3m16s   Normal   Pulled   pod/busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c   Container image "docker.io/library/busybox:1.29" already present on machine
kubelet-test-5003   3m15s   Normal   Created   pod/busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c   Created container busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c
kubelet-test-5003   3m15s   Normal   Started   pod/busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c   Started container busybox-host-aliases9bfd13fb-897c-4c68-9b4c-5ad1f65f711c
mount-propagation-2103   7m9s   Normal   Pulled   pod/default   Container image "docker.io/library/busybox:1.29" already present on machine
mount-propagation-2103   7m9s   Normal   Created   pod/default   Created container cntr
mount-propagation-2103   7m7s   Normal   Started   pod/default   Started container cntr
mount-propagation-2103   6m12s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-5wn8-xwxg8   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
mount-propagation-2103   6m12s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-5wn8-xwxg8   Created container agnhost
mount-propagation-2103   6m11s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-5wn8-xwxg8   Started container agnhost
mount-propagation-2103   5m7s   Normal   Killing   pod/hostexec-bootstrap-e2e-minion-group-5wn8-xwxg8   Stopping container agnhost
mount-propagation-2103   7m34s   Normal   Pulled   pod/master   Container image "docker.io/library/busybox:1.29" already present on machine
mount-propagation-2103   7m33s   Normal   Created   pod/master   Created container cntr
mount-propagation-2103   7m33s   Normal   Started   pod/master   Started container cntr
mount-propagation-2103   7m22s   Normal   Pulled   pod/private   Container image "docker.io/library/busybox:1.29" already present on machine
mount-propagation-2103   7m22s   Normal   Created   pod/private   Created container cntr
mount-propagation-2103   7m20s   Normal   Started   pod/private   Started container cntr
mount-propagation-2103   7m27s   Normal   Pulled   pod/slave   Container image "docker.io/library/busybox:1.29" already present on machine
mount-propagation-2103   7m27s   Normal   Created   pod/slave   Created container cntr
mount-propagation-2103   7m27s   Normal   Started   pod/slave   Started container cntr
mounted-volume-expand-4219   6m45s   Normal   SuccessfulCreate   replicaset/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb   Created pod: deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx
mounted-volume-expand-4219   6m7s   Normal   SuccessfulCreate   replicaset/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb   Created pod: deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6
mounted-volume-expand-4219   6m7s   Normal   Scheduled   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6   Successfully assigned mounted-volume-expand-4219/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6 to bootstrap-e2e-minion-group-5wn8
mounted-volume-expand-4219   6m5s   Normal   FileSystemResizeSuccessful   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6   MountVolume.NodeExpandVolume succeeded for volume "pvc-f0a2e75b-ddcb-4e25-9502-adb80f6b6f41"
mounted-volume-expand-4219   6m1s   Normal   Pulled   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6   Container image "docker.io/library/busybox:1.29" already present on machine
mounted-volume-expand-4219   6m1s   Normal   Created   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6   Created container write-pod
mounted-volume-expand-4219   6m1s   Normal   Started   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb2vhq6   Started container write-pod
mounted-volume-expand-4219   6m41s   Normal   Scheduled   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   Successfully assigned mounted-volume-expand-4219/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx to bootstrap-e2e-minion-group-5wn8
mounted-volume-expand-4219   6m35s   Normal   SuccessfulAttachVolume   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   AttachVolume.Attach succeeded for volume "pvc-f0a2e75b-ddcb-4e25-9502-adb80f6b6f41"
mounted-volume-expand-4219   6m29s   Normal   Pulled   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   Container image "docker.io/library/busybox:1.29" already present on machine
mounted-volume-expand-4219   6m29s   Normal   Created   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   Created container write-pod
mounted-volume-expand-4219   6m28s   Normal   Started   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   Started container write-pod
mounted-volume-expand-4219   6m8s   Normal   Killing   pod/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fbx62vx   Stopping container write-pod
mounted-volume-expand-4219   6m45s   Normal   ScalingReplicaSet   deployment/deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463   Scaled up replica set deployment-c40a0dfa-e620-4647-8aa2-2242fe70f463-548dc777fb to 1
mounted-volume-expand-4219   6m46s   Normal   WaitForFirstConsumer   persistentvolumeclaim/pvc-x694c   waiting for first consumer to be created before binding
mounted-volume-expand-4219   6m42s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-x694c   Successfully provisioned volume pvc-f0a2e75b-ddcb-4e25-9502-adb80f6b6f41 using kubernetes.io/gce-pd
mounted-volume-expand-4219   6m5s   Normal   FileSystemResizeSuccessful   persistentvolumeclaim/pvc-x694c   MountVolume.NodeExpandVolume succeeded for volume "pvc-f0a2e75b-ddcb-4e25-9502-adb80f6b6f41"
persistent-local-volumes-test-1422   5m54s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-zqmd4   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-1422   5m54s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-zqmd4   Created container agnhost
persistent-local-volumes-test-1422   5m54s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-zqmd4   Started container agnhost
persistent-local-volumes-test-1422   5m44s   Normal   Scheduled   pod/security-context-162ef576-3858-444e-9464-cb09a522c445   Successfully assigned persistent-local-volumes-test-1422/security-context-162ef576-3858-444e-9464-cb09a522c445 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-1422   5m42s   Normal   Pulled   pod/security-context-162ef576-3858-444e-9464-cb09a522c445   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-1422   5m41s   Normal   Created   pod/security-context-162ef576-3858-444e-9464-cb09a522c445   Created container write-pod
persistent-local-volumes-test-1422   5m41s   Normal   Started   pod/security-context-162ef576-3858-444e-9464-cb09a522c445   Started container write-pod
persistent-local-volumes-test-1422   5m30s   Normal   Killing   pod/security-context-162ef576-3858-444e-9464-cb09a522c445   Stopping container write-pod
persistent-local-volumes-test-2017   5m45s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-k4gm6   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-2017   5m45s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-k4gm6   Created container agnhost
persistent-local-volumes-test-2017   5m45s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-k4gm6   Started container agnhost
persistent-local-volumes-test-2207   3m56s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-tc68z   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-2207   3m56s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-tc68z   Created container agnhost
persistent-local-volumes-test-2207   3m53s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-tc68z   Started container agnhost
persistent-local-volumes-test-2571   4m27s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-sq7sk   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-2571   4m27s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-sq7sk   Created container agnhost
persistent-local-volumes-test-2571   4m25s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-sq7sk   Started container agnhost
persistent-local-volumes-test-2571   4m17s   Warning   ProvisioningFailed   persistentvolumeclaim/pvc-8r9sk   no volume plugin matched
persistent-local-volumes-test-2571   3m47s   Normal   Scheduled   pod/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae   Successfully assigned persistent-local-volumes-test-2571/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-2571   3m44s   Normal   Pulled   pod/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2571   3m44s   Normal   Created   pod/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae   Created container write-pod
persistent-local-volumes-test-2571   3m41s   Normal   Started   pod/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae   Started container write-pod
persistent-local-volumes-test-2571   3m24s   Normal   Killing   pod/security-context-0344875a-5ca3-4612-bf3f-9e1107b2efae   Stopping container write-pod
persistent-local-volumes-test-2571   4m9s   Normal   Scheduled   pod/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6   Successfully assigned persistent-local-volumes-test-2571/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-2571   4m4s   Normal   Pulled   pod/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2571   4m4s   Normal   Created   pod/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6   Created container write-pod
persistent-local-volumes-test-2571   4m3s   Normal   Started   pod/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6   Started container write-pod
persistent-local-volumes-test-2571   3m25s   Normal   Killing   pod/security-context-c7bf4545-5aba-4b3b-917b-f6e94d8ad7e6   Stopping container write-pod
persistent-local-volumes-test-2773   5m8s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-bt762   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-2773   5m8s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-bt762   Created container agnhost
persistent-local-volumes-test-2773   5m7s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-bt762   Started container agnhost
persistent-local-volumes-test-2773   4m49s   Normal   Scheduled   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   Successfully assigned persistent-local-volumes-test-2773/security-context-713c26f9-23fa-44b1-816f-760eae8a5713 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-2773   4m48s   Normal   SuccessfulMountVolume   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   MapVolume.MapPodDevice succeeded for volume "local-pvhqmjp" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvhqmjp"
persistent-local-volumes-test-2773   4m48s   Normal   SuccessfulMountVolume   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   MapVolume.MapPodDevice succeeded for volume "local-pvhqmjp" volumeMapPath "/var/lib/kubelet/pods/c695d586-7d92-4e40-88c3-7ced07c60471/volumeDevices/kubernetes.io~local-volume"
persistent-local-volumes-test-2773   4m45s   Normal   Pulled   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2773   4m45s   Normal   Created   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   Created container write-pod
persistent-local-volumes-test-2773   4m42s   Normal   Started   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   Started container write-pod
persistent-local-volumes-test-2773   4m20s   Normal   Killing   pod/security-context-713c26f9-23fa-44b1-816f-760eae8a5713   Stopping container write-pod
persistent-local-volumes-test-2773   4m19s   Normal   Scheduled   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   Successfully assigned persistent-local-volumes-test-2773/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-2773   4m18s   Normal   SuccessfulMountVolume   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   MapVolume.MapPodDevice succeeded for volume "local-pvhqmjp" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvhqmjp"
persistent-local-volumes-test-2773   4m18s   Normal   SuccessfulMountVolume   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   MapVolume.MapPodDevice succeeded for volume "local-pvhqmjp" volumeMapPath "/var/lib/kubelet/pods/f7d1801b-7a74-4600-821b-440e925b6243/volumeDevices/kubernetes.io~local-volume"
persistent-local-volumes-test-2773   4m16s   Normal   Pulled   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2773   4m16s   Normal   Created   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   Created container write-pod
persistent-local-volumes-test-2773   4m15s   Normal   Started   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   Started container write-pod
persistent-local-volumes-test-2773   4m   Normal   Killing   pod/security-context-95ae8858-10c2-4b5a-ac6d-ebfaa50e4e53   Stopping container write-pod
persistent-local-volumes-test-4308   31s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-rt7h2   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-4308   31s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-rt7h2   Created container agnhost
persistent-local-volumes-test-4308   28s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-rt7h2   Started container agnhost
persistent-local-volumes-test-4308   17s   Normal   Scheduled   pod/security-context-1a57620e-535e-43cc-bb50-5d18f7905731   Successfully assigned persistent-local-volumes-test-4308/security-context-1a57620e-535e-43cc-bb50-5d18f7905731 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-4308   15s   Normal   Pulled   pod/security-context-1a57620e-535e-43cc-bb50-5d18f7905731   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4308   15s   Normal   Created   pod/security-context-1a57620e-535e-43cc-bb50-5d18f7905731   Created container write-pod
persistent-local-volumes-test-4308   15s   Normal   Started   pod/security-context-1a57620e-535e-43cc-bb50-5d18f7905731   Started container write-pod
persistent-local-volumes-test-4308   9s   Normal   Killing   pod/security-context-1a57620e-535e-43cc-bb50-5d18f7905731   Stopping container write-pod
persistent-local-volumes-test-4308   8s   Normal   Scheduled   pod/security-context-e3dde85c-7317-4c76-a190-2f8e3e1db0ee   Successfully assigned persistent-local-volumes-test-4308/security-context-e3dde85c-7317-4c76-a190-2f8e3e1db0ee to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-4308   3s   Normal   Pulled   pod/security-context-e3dde85c-7317-4c76-a190-2f8e3e1db0ee   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4308   3s   Normal   Created   pod/security-context-e3dde85c-7317-4c76-a190-2f8e3e1db0ee   Created container write-pod
persistent-local-volumes-test-4534   5m29s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-pd552   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-4534   5m29s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-pd552   Created container agnhost
persistent-local-volumes-test-4534   5m28s   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-pd552   Started container agnhost
persistent-local-volumes-test-4534   5m2s   Normal   Scheduled   pod/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2   Successfully assigned persistent-local-volumes-test-4534/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-4534   4m55s   Normal   Pulled   pod/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4534   4m54s   Normal   Created   pod/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2   Created container write-pod
persistent-local-volumes-test-4534   4m53s   Normal   Started   pod/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2   Started container write-pod
persistent-local-volumes-test-4534   4m45s   Normal   Killing   pod/security-context-0459f49d-6168-44dd-bc75-cdae0a3b44f2   Stopping container write-pod
persistent-local-volumes-test-4534   5m15s   Normal   Scheduled   pod/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107   Successfully assigned persistent-local-volumes-test-4534/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-4534   5m13s   Normal   Pulled   pod/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4534   5m13s   Normal   Created   pod/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107   Created container write-pod
persistent-local-volumes-test-4534   5m12s   Normal   Started   pod/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107   Started container write-pod
persistent-local-volumes-test-4534   5m3s   Normal   Killing   pod/security-context-c9c0bb61-4dc9-414d-a158-0c81e6082107   Stopping container write-pod
persistent-local-volumes-test-7700   3m2s   Normal   Pulled   pod/hostexec-bootstrap-e2e-minion-group-1s6w-wbqxm   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-7700   3m2s   Normal   Created   pod/hostexec-bootstrap-e2e-minion-group-1s6w-wbqxm   Created container agnhost
persistent-local-volumes-test-7700   3m   Normal   Started   pod/hostexec-bootstrap-e2e-minion-group-1s6w-wbqxm   Started container agnhost
persistent-local-volumes-test-7700   2m44s   Normal   Scheduled   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Successfully assigned persistent-local-volumes-test-7700/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60 to bootstrap-e2e-minion-group-1s6w
persistent-local-volumes-test-7700   2m44s   Warning   FailedMount   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[default-token-bhgg6 volume1]: error processing PVC persistent-local-volumes-test-7700/pvc-bvsm4: failed to fetch PVC from API server: persistentvolumeclaims "pvc-bvsm4" is forbidden: User "system:node:bootstrap-e2e-minion-group-1s6w" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "persistent-local-volumes-test-7700": no relationship found between node "bootstrap-e2e-minion-group-1s6w" and this object
persistent-local-volumes-test-7700   2m43s   Normal   SuccessfulMountVolume   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   MapVolume.MapPodDevice succeeded for volume "local-pvv5m2n" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvv5m2n"
persistent-local-volumes-test-7700   2m43s   Normal   SuccessfulMountVolume   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   MapVolume.MapPodDevice succeeded for volume "local-pvv5m2n" volumeMapPath "/var/lib/kubelet/pods/10269f75-5dbf-42a8-9744-bed54c1ec795/volumeDevices/kubernetes.io~local-volume"
persistent-local-volumes-test-7700   2m30s   Normal   Pulled   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-7700   2m30s   Normal   Created   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Created container write-pod
persistent-local-volumes-test-7700   2m28s   Normal   Started   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Started container write-pod
persistent-local-volumes-test-7700   2m21s   Normal   Killing   pod/security-context-e132dea0-362c-498b-8aa4-6b7c1c99eb60   Stopping container write-pod
persistent-local-volumes-test-8035   5m16s   Warning   FailedMount   pod/hostexec-bootstrap-e2e-minion-group-1s6w-f9ksf   MountVolume.SetUp failed for volume "default-token-7c8lv" : failed to sync secret cache: timed out waiting for the condition
persistent-local-volumes-test-8035   5m15s
Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-f9ksf                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-8035   5m15s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-f9ksf                          Created container agnhost\npersistent-local-volumes-test-8035   5m15s       Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-f9ksf                          Started container agnhost\npersistent-local-volumes-test-8035   5m3s        Warning   ProvisioningFailed           persistentvolumeclaim/pvc-r8858                                             no volume plugin matched\npersistent-local-volumes-test-8035   4m27s       Normal    Scheduled                    pod/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0                   Successfully assigned persistent-local-volumes-test-8035/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0 to bootstrap-e2e-minion-group-1s6w\npersistent-local-volumes-test-8035   4m24s       Normal    Pulled                       pod/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-8035   4m24s       Normal    Created                      pod/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0                   Created container write-pod\npersistent-local-volumes-test-8035   4m23s       Normal    Started                      pod/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0                   Started container write-pod\npersistent-local-volumes-test-8035   4m8s        Normal    Killing                      pod/security-context-618c9e7d-1795-4cfc-9e98-26598a574ab0                   Stopping container write-pod\npersistent-local-volumes-test-8035   4m51s       Normal    Scheduled                 
   pod/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6                   Successfully assigned persistent-local-volumes-test-8035/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6 to bootstrap-e2e-minion-group-1s6w\npersistent-local-volumes-test-8035   4m46s       Normal    Pulled                       pod/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-8035   4m46s       Normal    Created                      pod/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6                   Created container write-pod\npersistent-local-volumes-test-8035   4m44s       Normal    Started                      pod/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6                   Started container write-pod\npersistent-local-volumes-test-8035   4m8s        Normal    Killing                      pod/security-context-6d8eb68b-6146-4bbd-96ea-c8541ce888e6                   Stopping container write-pod\npersistent-local-volumes-test-8692   2m3s        Warning   FailedMount                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-wrqzd                          MountVolume.SetUp failed for volume \"default-token-7mqmt\" : failed to sync secret cache: timed out waiting for the condition\npersistent-local-volumes-test-8692   2m1s        Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-wrqzd                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-8692   2m1s        Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-wrqzd                          Created container agnhost\npersistent-local-volumes-test-8692   2m          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-wrqzd                          Started container 
agnhost\npersistent-local-volumes-test-8692   107s        Normal    Scheduled                    pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Successfully assigned persistent-local-volumes-test-8692/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2 to bootstrap-e2e-minion-group-1s6w\npersistent-local-volumes-test-8692   106s        Warning   FailedMount                  pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 default-token-7mqmt]: error processing PVC persistent-local-volumes-test-8692/pvc-wq97c: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-wq97c\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-1s6w\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"persistent-local-volumes-test-8692\": no relationship found between node \"bootstrap-e2e-minion-group-1s6w\" and this object\npersistent-local-volumes-test-8692   91s         Normal    Pulled                       pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-8692   91s         Normal    Created                      pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Created container write-pod\npersistent-local-volumes-test-8692   90s         Normal    Started                      pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Started container write-pod\npersistent-local-volumes-test-8692   84s         Normal    Killing                      pod/security-context-aa27a0b6-0931-4b30-8947-3d36085c13f2                   Stopping container write-pod\npersistent-local-volumes-test-9002   66s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-vslgc                       
   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-9002   66s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-vslgc                          Created container agnhost\npersistent-local-volumes-test-9002   64s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-vslgc                          Started container agnhost\npersistent-local-volumes-test-9002   52s         Normal    Scheduled                    pod/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52                   Successfully assigned persistent-local-volumes-test-9002/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52 to bootstrap-e2e-minion-group-1s6w\npersistent-local-volumes-test-9002   48s         Normal    Pulled                       pod/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-9002   48s         Normal    Created                      pod/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52                   Created container write-pod\npersistent-local-volumes-test-9002   46s         Normal    Started                      pod/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52                   Started container write-pod\npersistent-local-volumes-test-9002   36s         Normal    Killing                      pod/security-context-07718c54-2b57-45ed-8aeb-39d34212ab52                   Stopping container write-pod\npersistent-local-volumes-test-9062   68s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-vqpl2                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-9062   68s         Normal    Created                      
pod/hostexec-bootstrap-e2e-minion-group-1s6w-vqpl2                          Created container agnhost\npersistent-local-volumes-test-9062   66s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-vqpl2                          Started container agnhost\npersistent-local-volumes-test-9062   48s         Normal    Scheduled                    pod/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c                   Successfully assigned persistent-local-volumes-test-9062/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c to bootstrap-e2e-minion-group-1s6w\npersistent-local-volumes-test-9062   44s         Normal    Pulled                       pod/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-9062   43s         Normal    Created                      pod/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c                   Created container write-pod\npersistent-local-volumes-test-9062   43s         Normal    Started                      pod/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c                   Started container write-pod\npersistent-local-volumes-test-9062   33s         Normal    Killing                      pod/security-context-f7f0ff83-ab64-42da-b184-871ad52e221c                   Stopping container write-pod\npersistent-local-volumes-test-9269   5m29s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-84jk7                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-9269   5m29s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-84jk7                          Created container agnhost\npersistent-local-volumes-test-9269   5m28s       Normal    Started                      
pod/hostexec-bootstrap-e2e-minion-group-1s6w-84jk7                          Started container agnhost\npod-network-test-6935                5m7s        Normal    Scheduled                    pod/host-test-container-pod                                                 Successfully assigned pod-network-test-6935/host-test-container-pod to bootstrap-e2e-minion-group-dwjn\npod-network-test-6935                5m5s        Normal    Pulled                       pod/host-test-container-pod                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m5s        Normal    Created                      pod/host-test-container-pod                                                 Created container agnhost\npod-network-test-6935                5m4s        Normal    Started                      pod/host-test-container-pod                                                 Started container agnhost\npod-network-test-6935                5m46s       Normal    Scheduled                    pod/netserver-0                                                             Successfully assigned pod-network-test-6935/netserver-0 to bootstrap-e2e-minion-group-1s6w\npod-network-test-6935                5m43s       Normal    Pulled                       pod/netserver-0                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m43s       Normal    Created                      pod/netserver-0                                                             Created container webserver\npod-network-test-6935                5m42s       Normal    Started                      pod/netserver-0                                                             Started container webserver\npod-network-test-6935                5m46s       Normal    Scheduled                    
pod/netserver-1                                                             Successfully assigned pod-network-test-6935/netserver-1 to bootstrap-e2e-minion-group-5wn8\npod-network-test-6935                5m41s       Normal    Pulled                       pod/netserver-1                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m41s       Normal    Created                      pod/netserver-1                                                             Created container webserver\npod-network-test-6935                5m40s       Normal    Started                      pod/netserver-1                                                             Started container webserver\npod-network-test-6935                5m45s       Normal    Scheduled                    pod/netserver-2                                                             Successfully assigned pod-network-test-6935/netserver-2 to bootstrap-e2e-minion-group-7htw\npod-network-test-6935                5m34s       Normal    Pulled                       pod/netserver-2                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m34s       Normal    Created                      pod/netserver-2                                                             Created container webserver\npod-network-test-6935                5m29s       Normal    Started                      pod/netserver-2                                                             Started container webserver\npod-network-test-6935                5m45s       Normal    Scheduled                    pod/netserver-3                                                             Successfully assigned pod-network-test-6935/netserver-3 to bootstrap-e2e-minion-group-dwjn\npod-network-test-6935           
     5m43s       Normal    Pulled                       pod/netserver-3                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m43s       Normal    Created                      pod/netserver-3                                                             Created container webserver\npod-network-test-6935                5m42s       Normal    Started                      pod/netserver-3                                                             Started container webserver\npod-network-test-6935                5m8s        Normal    Scheduled                    pod/test-container-pod                                                      Successfully assigned pod-network-test-6935/test-container-pod to bootstrap-e2e-minion-group-dwjn\npod-network-test-6935                5m4s        Normal    Pulled                       pod/test-container-pod                                                      Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-6935                5m4s        Normal    Created                      pod/test-container-pod                                                      Created container webserver\npod-network-test-6935                5m4s        Normal    Started                      pod/test-container-pod                                                      Started container webserver\npod-network-test-7910                3m51s       Normal    Scheduled                    pod/netserver-0                                                             Successfully assigned pod-network-test-7910/netserver-0 to bootstrap-e2e-minion-group-1s6w\npod-network-test-7910                3m48s       Normal    Pulled                       pod/netserver-0                                                             Container image 
\"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-7910                3m48s       Normal    Created                      pod/netserver-0                                                             Created container webserver\npod-network-test-7910                3m46s       Normal    Started                      pod/netserver-0                                                             Started container webserver\npod-network-test-7910                3m51s       Normal    Scheduled                    pod/netserver-1                                                             Successfully assigned pod-network-test-7910/netserver-1 to bootstrap-e2e-minion-group-5wn8\npod-network-test-7910                3m49s       Normal    Pulled                       pod/netserver-1                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-7910                3m49s       Normal    Created                      pod/netserver-1                                                             Created container webserver\npod-network-test-7910                3m49s       Normal    Started                      pod/netserver-1                                                             Started container webserver\npod-network-test-7910                3m50s       Normal    Scheduled                    pod/netserver-2                                                             Successfully assigned pod-network-test-7910/netserver-2 to bootstrap-e2e-minion-group-7htw\npod-network-test-7910                3m46s       Normal    Pulled                       pod/netserver-2                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-7910                3m46s       Normal    Created                      pod/netserver-2                   
                                          Created container webserver\npod-network-test-7910                3m44s       Normal    Started                      pod/netserver-2                                                             Started container webserver\npod-network-test-7910                3m50s       Normal    Scheduled                    pod/netserver-3                                                             Successfully assigned pod-network-test-7910/netserver-3 to bootstrap-e2e-minion-group-dwjn\npod-network-test-7910                3m49s       Warning   FailedMount                  pod/netserver-3                                                             MountVolume.SetUp failed for volume \"default-token-lh6tf\" : failed to sync secret cache: timed out waiting for the condition\npod-network-test-7910                3m47s       Normal    Pulled                       pod/netserver-3                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-7910                3m47s       Normal    Created                      pod/netserver-3                                                             Created container webserver\npod-network-test-7910                3m47s       Normal    Started                      pod/netserver-3                                                             Started container webserver\npod-network-test-7910                3m24s       Normal    Scheduled                    pod/test-container-pod                                                      Successfully assigned pod-network-test-7910/test-container-pod to bootstrap-e2e-minion-group-dwjn\npod-network-test-7910                3m23s       Normal    Pulled                       pod/test-container-pod                                                      Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npod-network-test-7910  
              3m23s       Normal    Created                      pod/test-container-pod                                                      Created container webserver\npod-network-test-7910                3m22s       Normal    Started                      pod/test-container-pod                                                      Started container webserver\npods-3596                            4m30s       Normal    Scheduled                    pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Successfully assigned pods-3596/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e to bootstrap-e2e-minion-group-1s6w\npods-3596                            4m28s       Normal    Pulled                       pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\npods-3596                            4m28s       Normal    Created                      pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Created container nginx\npods-3596                            4m27s       Normal    Started                      pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Started container nginx\npods-3596                            4m13s       Normal    DeadlineExceeded             pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Pod was active on the node longer than the specified deadline\npods-3596                            4m18s       Normal    Killing                      pod/pod-update-activedeadlineseconds-a9dd9446-85bd-45e5-9ad1-3462f513b07e   Stopping container nginx\npods-4023                            3m44s       Normal    Scheduled                    pod/pod-hostip-d7448d05-b328-48e6-bbbc-9cfff0705c08                         Successfully assigned pods-4023/pod-hostip-d7448d05-b328-48e6-bbbc-9cfff0705c08 to bootstrap-e2e-minion-group-1s6w\npods-4023   
                         3m39s       Normal    Pulled                       pod/pod-hostip-d7448d05-b328-48e6-bbbc-9cfff0705c08                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\npods-4023                            3m39s       Normal    Created                      pod/pod-hostip-d7448d05-b328-48e6-bbbc-9cfff0705c08                         Created container test\npods-4023                            3m37s       Normal    Started                      pod/pod-hostip-d7448d05-b328-48e6-bbbc-9cfff0705c08                         Started container test\nport-forwarding-328                  3m16s       Normal    Scheduled                    pod/pfpod                                                                   Successfully assigned port-forwarding-328/pfpod to bootstrap-e2e-minion-group-5wn8\nport-forwarding-328                  3m12s       Normal    Pulled                       pod/pfpod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nport-forwarding-328                  3m12s       Normal    Created                      pod/pfpod                                                                   Created container readiness\nport-forwarding-328                  3m10s       Normal    Started                      pod/pfpod                                                                   Started container readiness\nport-forwarding-328                  3m10s       Normal    Pulled                       pod/pfpod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nport-forwarding-328                  3m9s        Normal    Created                      pod/pfpod                                                                   Created container portforwardtester\nport-forwarding-328                  3m9s        
Normal    Started                      pod/pfpod                                                                   Started container portforwardtester\nport-forwarding-328                  2m17s       Warning   Unhealthy                    pod/pfpod                                                                   Readiness probe failed:\nport-forwarding-4263                 4m20s       Normal    Scheduled                    pod/pfpod                                                                   Successfully assigned port-forwarding-4263/pfpod to bootstrap-e2e-minion-group-1s6w\nport-forwarding-4263                 4m17s       Normal    Pulled                       pod/pfpod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nport-forwarding-4263                 4m17s       Normal    Created                      pod/pfpod                                                                   Created container readiness\nport-forwarding-4263                 4m17s       Normal    Started                      pod/pfpod                                                                   Started container readiness\nport-forwarding-4263                 4m17s       Normal    Pulled                       pod/pfpod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nport-forwarding-4263                 4m16s       Normal    Created                      pod/pfpod                                                                   Created container portforwardtester\nport-forwarding-4263                 4m15s       Normal    Started                      pod/pfpod                                                                   Started container portforwardtester\nport-forwarding-4263                 3m25s       Warning   Unhealthy                    pod/pfpod         
Readiness probe failed:
port-forwarding-5073  95s    Normal   Scheduled             pod/pfpod  Successfully assigned port-forwarding-5073/pfpod to bootstrap-e2e-minion-group-5wn8
port-forwarding-5073  91s    Normal   Pulled                pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-5073  91s    Normal   Created               pod/pfpod  Created container readiness
port-forwarding-5073  91s    Normal   Started               pod/pfpod  Started container readiness
port-forwarding-5073  91s    Normal   Pulled                pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-5073  91s    Normal   Created               pod/pfpod  Created container portforwardtester
port-forwarding-5073  90s    Normal   Started               pod/pfpod  Started container portforwardtester
port-forwarding-5073  61s    Warning  Unhealthy             pod/pfpod  Readiness probe failed:
projected-1167        6m34s  Normal   Scheduled             pod/pod-projected-configmaps-bc41b1c4-ff6a-44df-8c5c-67b2ec8fd664  Successfully assigned projected-1167/pod-projected-configmaps-bc41b1c4-ff6a-44df-8c5c-67b2ec8fd664 to bootstrap-e2e-minion-group-1s6w
projected-1167        6m29s  Normal   Pulled                pod/pod-projected-configmaps-bc41b1c4-ff6a-44df-8c5c-67b2ec8fd664  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-1167        6m28s  Normal   Created               pod/pod-projected-configmaps-bc41b1c4-ff6a-44df-8c5c-67b2ec8fd664  Created container projected-configmap-volume-test
projected-1167        6m25s  Warning  Failed                pod/pod-projected-configmaps-bc41b1c4-ff6a-44df-8c5c-67b2ec8fd664  Error: failed to start container "projected-configmap-volume-test": Error response from daemon: OCI runtime start failed: container process is already dead: unknown
projected-2791        2m57s  Normal   Scheduled             pod/metadata-volume-9811babc-17d2-4a34-b963-5fc8df1b891b  Successfully assigned projected-2791/metadata-volume-9811babc-17d2-4a34-b963-5fc8df1b891b to bootstrap-e2e-minion-group-5wn8
projected-2791        2m53s  Normal   Pulled                pod/metadata-volume-9811babc-17d2-4a34-b963-5fc8df1b891b  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-2791        2m52s  Normal   Created               pod/metadata-volume-9811babc-17d2-4a34-b963-5fc8df1b891b  Created container client-container
projected-2791        2m51s  Normal   Started               pod/metadata-volume-9811babc-17d2-4a34-b963-5fc8df1b891b  Started container client-container
projected-38          65s    Normal   Scheduled             pod/pod-projected-configmaps-318e7c1b-d664-45f1-aa12-7cc128246a34  Successfully assigned projected-38/pod-projected-configmaps-318e7c1b-d664-45f1-aa12-7cc128246a34 to bootstrap-e2e-minion-group-7htw
projected-38          62s    Normal   Pulled                pod/pod-projected-configmaps-318e7c1b-d664-45f1-aa12-7cc128246a34  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-38          62s    Normal   Created               pod/pod-projected-configmaps-318e7c1b-d664-45f1-aa12-7cc128246a34  Created container projected-configmap-volume-test
projected-38          62s    Normal   Started               pod/pod-projected-configmaps-318e7c1b-d664-45f1-aa12-7cc128246a34  Started container projected-configmap-volume-test
projected-3942        5m53s  Normal   Scheduled             pod/downwardapi-volume-2ccdcc71-abef-4d69-930c-67baf1228348  Successfully assigned projected-3942/downwardapi-volume-2ccdcc71-abef-4d69-930c-67baf1228348 to bootstrap-e2e-minion-group-5wn8
projected-3942        5m51s  Normal   Pulled                pod/downwardapi-volume-2ccdcc71-abef-4d69-930c-67baf1228348  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-3942        5m51s  Normal   Created               pod/downwardapi-volume-2ccdcc71-abef-4d69-930c-67baf1228348  Created container client-container
projected-3942        5m49s  Normal   Started               pod/downwardapi-volume-2ccdcc71-abef-4d69-930c-67baf1228348  Started container client-container
projected-5642        60s    Normal   Scheduled             pod/pod-projected-secrets-d6e83335-b10a-4f2f-8ffa-595eff17bffe  Successfully assigned projected-5642/pod-projected-secrets-d6e83335-b10a-4f2f-8ffa-595eff17bffe to bootstrap-e2e-minion-group-7htw
projected-5642        57s    Normal   Pulled                pod/pod-projected-secrets-d6e83335-b10a-4f2f-8ffa-595eff17bffe  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-5642        57s    Normal   Created               pod/pod-projected-secrets-d6e83335-b10a-4f2f-8ffa-595eff17bffe  Created container projected-secret-volume-test
projected-5642        55s    Normal   Started               pod/pod-projected-secrets-d6e83335-b10a-4f2f-8ffa-595eff17bffe  Started container projected-secret-volume-test
projected-6463        5m52s  Normal   Scheduled             pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  Successfully assigned projected-6463/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd to bootstrap-e2e-minion-group-dwjn
projected-6463        5m51s  Warning  FailedMount           pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  MountVolume.SetUp failed for volume "projected-configmap-volume" : failed to sync configmap cache: timed out waiting for the condition
projected-6463        5m51s  Warning  FailedMount           pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  MountVolume.SetUp failed for volume "default-token-zbk2n" : failed to sync secret cache: timed out waiting for the condition
projected-6463        5m49s  Normal   Pulled                pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-6463        5m49s  Normal   Created               pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  Created container projected-configmap-volume-test
projected-6463        5m49s  Normal   Started               pod/pod-projected-configmaps-a57394a6-a2fb-431e-9400-21f63fc3acdd  Started container projected-configmap-volume-test
projected-698         3m17s  Normal   Scheduled             pod/pod-projected-configmaps-ba521a63-a2fe-461e-bd5f-bc4206e99872  Successfully assigned projected-698/pod-projected-configmaps-ba521a63-a2fe-461e-bd5f-bc4206e99872 to bootstrap-e2e-minion-group-1s6w
projected-698         3m11s  Normal   Pulled                pod/pod-projected-configmaps-ba521a63-a2fe-461e-bd5f-bc4206e99872  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-698         3m11s  Normal   Created               pod/pod-projected-configmaps-ba521a63-a2fe-461e-bd5f-bc4206e99872  Created container projected-configmap-volume-test
projected-698         3m10s  Normal   Started               pod/pod-projected-configmaps-ba521a63-a2fe-461e-bd5f-bc4206e99872  Started container projected-configmap-volume-test
projected-7239        3m58s  Normal   Scheduled             pod/annotationupdate39435967-13a0-4bbd-a6c1-df38664b154d  Successfully assigned projected-7239/annotationupdate39435967-13a0-4bbd-a6c1-df38664b154d to bootstrap-e2e-minion-group-dwjn
projected-7239        3m57s  Normal   Pulled                pod/annotationupdate39435967-13a0-4bbd-a6c1-df38664b154d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-7239        3m57s  Normal   Created               pod/annotationupdate39435967-13a0-4bbd-a6c1-df38664b154d  Created container client-container
projected-7239        3m56s  Normal   Started               pod/annotationupdate39435967-13a0-4bbd-a6c1-df38664b154d  Started container client-container
projected-7530        6m7s   Normal   Scheduled             pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Successfully assigned projected-7530/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d to bootstrap-e2e-minion-group-dwjn
projected-7530        6m6s   Normal   Pulled                pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-7530        6m6s   Normal   Created               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Created container delcm-volume-test
projected-7530        6m6s   Normal   Started               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Started container delcm-volume-test
projected-7530        6m6s   Normal   Pulled                pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-7530        6m6s   Normal   Created               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Created container updcm-volume-test
projected-7530        6m5s   Normal   Started               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Started container updcm-volume-test
projected-7530        6m5s   Normal   Pulled                pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-7530        6m5s   Normal   Created               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Created container createcm-volume-test
projected-7530        6m4s   Normal   Started               pod/pod-projected-configmaps-43b0100b-ec66-43a9-ad4e-ea4f096cbe6d  Started container createcm-volume-test
projected-767         2m38s  Normal   Scheduled             pod/downwardapi-volume-97a0aa4a-0c32-4b84-98e8-8f4bda1536d7  Successfully assigned projected-767/downwardapi-volume-97a0aa4a-0c32-4b84-98e8-8f4bda1536d7 to bootstrap-e2e-minion-group-5wn8
projected-767         2m32s  Normal   Pulled                pod/downwardapi-volume-97a0aa4a-0c32-4b84-98e8-8f4bda1536d7  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-767         2m31s  Normal   Created               pod/downwardapi-volume-97a0aa4a-0c32-4b84-98e8-8f4bda1536d7  Created container client-container
projected-767         2m29s  Normal   Started               pod/downwardapi-volume-97a0aa4a-0c32-4b84-98e8-8f4bda1536d7  Started container client-container
projected-7840        5m34s  Normal   Scheduled             pod/downwardapi-volume-e9dfecc1-8dfd-4e0d-99e2-ce4201c96074  Successfully assigned projected-7840/downwardapi-volume-e9dfecc1-8dfd-4e0d-99e2-ce4201c96074 to bootstrap-e2e-minion-group-dwjn
projected-7840        5m33s  Normal   Pulled                pod/downwardapi-volume-e9dfecc1-8dfd-4e0d-99e2-ce4201c96074  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-7840        5m33s  Normal   Created               pod/downwardapi-volume-e9dfecc1-8dfd-4e0d-99e2-ce4201c96074  Created container client-container
projected-7840        5m32s  Normal   Started               pod/downwardapi-volume-e9dfecc1-8dfd-4e0d-99e2-ce4201c96074  Started container client-container
projected-8172        4m9s   Normal   Scheduled             pod/pod-projected-secrets-9aae190f-bde4-40f1-a480-c29d9fcdb3a6  Successfully assigned projected-8172/pod-projected-secrets-9aae190f-bde4-40f1-a480-c29d9fcdb3a6 to bootstrap-e2e-minion-group-5wn8
projected-8172        4m7s   Normal   Pulled                pod/pod-projected-secrets-9aae190f-bde4-40f1-a480-c29d9fcdb3a6  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-8172        4m7s   Normal   Created               pod/pod-projected-secrets-9aae190f-bde4-40f1-a480-c29d9fcdb3a6  Created container projected-secret-volume-test
projected-8172        4m7s   Normal   Started               pod/pod-projected-secrets-9aae190f-bde4-40f1-a480-c29d9fcdb3a6  Started container projected-secret-volume-test
provisioning-1358     5m22s  Normal   Pulled                pod/hostexec-bootstrap-e2e-minion-group-7htw-knt4x  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-1358     5m22s  Normal   Created               pod/hostexec-bootstrap-e2e-minion-group-7htw-knt4x  Created container agnhost
provisioning-1358     5m17s  Normal   Started               pod/hostexec-bootstrap-e2e-minion-group-7htw-knt4x  Started container agnhost
provisioning-1358     4m3s   Normal   Killing               pod/hostexec-bootstrap-e2e-minion-group-7htw-knt4x  Stopping container agnhost
provisioning-1358     4m32s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dqxm  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-1358     4m32s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-dqxm  Created container init-volume-preprovisionedpv-dqxm
provisioning-1358     4m29s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-dqxm  Started container init-volume-preprovisionedpv-dqxm
provisioning-1358     4m27s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dqxm  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1358     4m27s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-dqxm  Created container test-init-subpath-preprovisionedpv-dqxm
provisioning-1358     4m25s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-dqxm  Started container test-init-subpath-preprovisionedpv-dqxm
provisioning-1358     4m23s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dqxm  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1358     4m23s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-dqxm  Created container test-container-subpath-preprovisionedpv-dqxm
provisioning-1358     4m21s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-dqxm  Started container test-container-subpath-preprovisionedpv-dqxm
provisioning-1358     4m21s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dqxm  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1358     4m21s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-dqxm  Created container test-container-volume-preprovisionedpv-dqxm
provisioning-1358     4m18s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-dqxm  Started container test-container-volume-preprovisionedpv-dqxm
provisioning-1358     4m52s  Warning  ProvisioningFailed    persistentvolumeclaim/pvc-4s7qf  storageclass.storage.k8s.io "provisioning-1358" not found
provisioning-1620     8s     Normal   Pulled                pod/hostexec-bootstrap-e2e-minion-group-7htw-hqv57  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-1620     8s     Normal   Created               pod/hostexec-bootstrap-e2e-minion-group-7htw-hqv57  Created container agnhost
provisioning-1620     7s     Normal   Started               pod/hostexec-bootstrap-e2e-minion-group-7htw-hqv57  Started container agnhost
provisioning-184      56s    Normal   Scheduled             pod/gluster-server  Successfully assigned provisioning-184/gluster-server to bootstrap-e2e-minion-group-7htw
provisioning-184      51s    Normal   Pulled                pod/gluster-server  Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
provisioning-184      51s    Normal   Created               pod/gluster-server  Created container gluster-server
provisioning-184      50s    Normal   Started               pod/gluster-server  Started container gluster-server
provisioning-184      19s    Normal   Killing               pod/gluster-server  Stopping container gluster-server
provisioning-184      42s    Normal   Scheduled             pod/pod-subpath-test-inlinevolume-bvsr  Successfully assigned provisioning-184/pod-subpath-test-inlinevolume-bvsr to bootstrap-e2e-minion-group-7htw
provisioning-184      37s    Normal   Pulled                pod/pod-subpath-test-inlinevolume-bvsr  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-184      37s    Normal   Created               pod/pod-subpath-test-inlinevolume-bvsr  Created container init-volume-inlinevolume-bvsr
provisioning-184      35s    Normal   Started               pod/pod-subpath-test-inlinevolume-bvsr  Started container init-volume-inlinevolume-bvsr
provisioning-184      32s    Normal   Pulled                pod/pod-subpath-test-inlinevolume-bvsr  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-184      32s    Normal   Created               pod/pod-subpath-test-inlinevolume-bvsr  Created container test-init-volume-inlinevolume-bvsr
provisioning-184      30s    Normal   Started               pod/pod-subpath-test-inlinevolume-bvsr  Started container test-init-volume-inlinevolume-bvsr
provisioning-184      29s    Normal   Pulled                pod/pod-subpath-test-inlinevolume-bvsr  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-184      28s    Normal   Created               pod/pod-subpath-test-inlinevolume-bvsr  Created container test-container-subpath-inlinevolume-bvsr
provisioning-184      26s    Normal   Started               pod/pod-subpath-test-inlinevolume-bvsr  Started container test-container-subpath-inlinevolume-bvsr
provisioning-2312     3m16s  Normal   Scheduled             pod/gluster-server  Successfully assigned provisioning-2312/gluster-server to bootstrap-e2e-minion-group-5wn8
provisioning-2312     3m11s  Normal   Pulled                pod/gluster-server  Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
provisioning-2312     3m11s  Normal   Created               pod/gluster-server  Created container gluster-server
provisioning-2312     3m9s   Normal   Started               pod/gluster-server  Started container gluster-server
provisioning-2312     2m43s  Normal   Killing               pod/gluster-server  Stopping container gluster-server
provisioning-2312     2m53s  Normal   Scheduled             pod/pod-subpath-test-preprovisionedpv-b8n2  Successfully assigned provisioning-2312/pod-subpath-test-preprovisionedpv-b8n2 to bootstrap-e2e-minion-group-dwjn
provisioning-2312     2m51s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-b8n2  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2312     2m51s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-b8n2  Created container init-volume-preprovisionedpv-b8n2
provisioning-2312     2m50s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-b8n2  Started container init-volume-preprovisionedpv-b8n2
provisioning-2312     2m50s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-b8n2  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2312     2m50s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-b8n2  Created container test-init-volume-preprovisionedpv-b8n2
provisioning-2312     2m49s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-b8n2  Started container test-init-volume-preprovisionedpv-b8n2
provisioning-2312     2m49s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-b8n2  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2312     2m48s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-b8n2  Created container test-container-subpath-preprovisionedpv-b8n2
provisioning-2312     2m48s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-b8n2  Started container test-container-subpath-preprovisionedpv-b8n2
provisioning-2312     3m5s   Warning  ProvisioningFailed    persistentvolumeclaim/pvc-xp4fl  storageclass.storage.k8s.io "provisioning-2312" not found
provisioning-2443     35s    Normal   Pulled                pod/hostexec-bootstrap-e2e-minion-group-7htw-gs5mq  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-2443     35s    Normal   Created               pod/hostexec-bootstrap-e2e-minion-group-7htw-gs5mq  Created container agnhost
provisioning-2443     33s    Normal   Started               pod/hostexec-bootstrap-e2e-minion-group-7htw-gs5mq  Started container agnhost
provisioning-2443     17s    Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-n9rv  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2443     17s    Normal   Created               pod/pod-subpath-test-preprovisionedpv-n9rv  Created container init-volume-preprovisionedpv-n9rv
provisioning-2443     15s    Normal   Started               pod/pod-subpath-test-preprovisionedpv-n9rv  Started container init-volume-preprovisionedpv-n9rv
provisioning-2443     14s    Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-n9rv  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2443     14s    Normal   Created               pod/pod-subpath-test-preprovisionedpv-n9rv  Created container test-init-subpath-preprovisionedpv-n9rv
provisioning-2443     12s    Normal   Started               pod/pod-subpath-test-preprovisionedpv-n9rv  Started container test-init-subpath-preprovisionedpv-n9rv
provisioning-2443     11s    Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-n9rv  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2443     10s    Normal   Created               pod/pod-subpath-test-preprovisionedpv-n9rv  Created container test-container-subpath-preprovisionedpv-n9rv
provisioning-2443     8s     Normal   Started               pod/pod-subpath-test-preprovisionedpv-n9rv  Started container test-container-subpath-preprovisionedpv-n9rv
provisioning-2443     29s    Warning  ProvisioningFailed    persistentvolumeclaim/pvc-r42bj  storageclass.storage.k8s.io "provisioning-2443" not found
provisioning-3242     3m52s  Normal   WaitForFirstConsumer  persistentvolumeclaim/gcepds5jhx  waiting for first consumer to be created before binding
provisioning-3242     3m48s  Normal   ProvisioningSucceeded persistentvolumeclaim/gcepds5jhx  Successfully provisioned volume pvc-3bbd6ae3-5ff2-408c-b1f7-162c067060f4 using kubernetes.io/gce-pd
provisioning-3242     3m47s  Normal   Scheduled             pod/pod-subpath-test-dynamicpv-zctl  Successfully assigned provisioning-3242/pod-subpath-test-dynamicpv-zctl to bootstrap-e2e-minion-group-1s6w
provisioning-3242     3m40s  Normal   SuccessfulAttachVolume pod/pod-subpath-test-dynamicpv-zctl  AttachVolume.Attach succeeded for volume "pvc-3bbd6ae3-5ff2-408c-b1f7-162c067060f4"
provisioning-3242     3m31s  Normal   Pulled                pod/pod-subpath-test-dynamicpv-zctl  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3242     3m30s  Normal   Created               pod/pod-subpath-test-dynamicpv-zctl  Created container test-init-subpath-dynamicpv-zctl
provisioning-3242     3m29s  Normal   Started               pod/pod-subpath-test-dynamicpv-zctl  Started container test-init-subpath-dynamicpv-zctl
provisioning-3242     3m28s  Normal   Pulled                pod/pod-subpath-test-dynamicpv-zctl  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3242     3m28s  Normal   Created               pod/pod-subpath-test-dynamicpv-zctl  Created container test-container-subpath-dynamicpv-zctl
provisioning-3242     3m27s  Normal   Started               pod/pod-subpath-test-dynamicpv-zctl  Started container test-container-subpath-dynamicpv-zctl
provisioning-3242     3m27s  Normal   Pulled                pod/pod-subpath-test-dynamicpv-zctl  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3242     3m27s  Normal   Created               pod/pod-subpath-test-dynamicpv-zctl  Created container test-container-volume-dynamicpv-zctl
provisioning-3242     3m27s  Normal   Started               pod/pod-subpath-test-dynamicpv-zctl  Started container test-container-volume-dynamicpv-zctl
provisioning-3846     5m     Normal   Scheduled             pod/gluster-server  Successfully assigned provisioning-3846/gluster-server to bootstrap-e2e-minion-group-5wn8
provisioning-3846     4m59s  Normal   Pulling               pod/gluster-server  Pulling image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
provisioning-3846     4m42s  Normal   Pulled                pod/gluster-server  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
provisioning-3846     4m42s  Normal   Created               pod/gluster-server  Created container gluster-server
provisioning-3846     4m41s  Normal   Started               pod/gluster-server  Started container gluster-server
provisioning-3846     4m12s  Normal   Killing               pod/gluster-server  Stopping container gluster-server
provisioning-3846     4m22s  Normal   Scheduled             pod/pod-subpath-test-preprovisionedpv-ktml  Successfully assigned provisioning-3846/pod-subpath-test-preprovisionedpv-ktml to bootstrap-e2e-minion-group-5wn8
provisioning-3846     4m20s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-ktml  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-3846     4m20s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-ktml  Created container init-volume-preprovisionedpv-ktml
provisioning-3846     4m19s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-ktml  Started container init-volume-preprovisionedpv-ktml
provisioning-3846     4m18s  Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-ktml  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3846     4m18s  Normal   Created               pod/pod-subpath-test-preprovisionedpv-ktml  Created container test-container-subpath-preprovisionedpv-ktml
provisioning-3846     4m18s  Normal   Started               pod/pod-subpath-test-preprovisionedpv-ktml  Started container test-container-subpath-preprovisionedpv-ktml
provisioning-3846     4m36s  Warning  ProvisioningFailed    persistentvolumeclaim/pvc-hmr2j  storageclass.storage.k8s.io "provisioning-3846" not found
provisioning-4019     2m25s  Normal   Pulled                pod/hostexec-bootstrap-e2e-minion-group-dwjn-swxpr  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-4019     2m25s  Normal   Created               pod/hostexec-bootstrap-e2e-minion-group-dwjn-swxpr  Created container agnhost
provisioning-4019     2m25s  Normal   Started               pod/hostexec-bootstrap-e2e-minion-group-dwjn-swxpr  Started container agnhost
provisioning-4019     109s   Normal   Killing               pod/hostexec-bootstrap-e2e-minion-group-dwjn-swxpr  Stopping container agnhost
provisioning-4019     2m3s   Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dq6t  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4019     2m3s   Normal   Created               pod/pod-subpath-test-preprovisionedpv-dq6t  Created container init-volume-preprovisionedpv-dq6t
provisioning-4019     2m2s   Normal   Started               pod/pod-subpath-test-preprovisionedpv-dq6t  Started container init-volume-preprovisionedpv-dq6t
provisioning-4019     2m     Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dq6t  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4019     2m     Normal   Created               pod/pod-subpath-test-preprovisionedpv-dq6t  Created container test-init-volume-preprovisionedpv-dq6t
provisioning-4019     2m     Normal   Started               pod/pod-subpath-test-preprovisionedpv-dq6t  Started container test-init-volume-preprovisionedpv-dq6t
provisioning-4019     119s   Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-dq6t  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4019     119s   Normal   Created               pod/pod-subpath-test-preprovisionedpv-dq6t  Created container test-container-subpath-preprovisionedpv-dq6t
provisioning-4019     119s   Normal   Started               pod/pod-subpath-test-preprovisionedpv-dq6t  Started container test-container-subpath-preprovisionedpv-dq6t
provisioning-4019     2m18s  Warning  ProvisioningFailed    persistentvolumeclaim/pvc-2sp8n  storageclass.storage.k8s.io "provisioning-4019" not found
provisioning-4285     2m4s   Normal   Pulled                pod/hostexec-bootstrap-e2e-minion-group-5wn8-txr5d  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-4285     2m4s   Normal   Created               pod/hostexec-bootstrap-e2e-minion-group-5wn8-txr5d  Created container agnhost
provisioning-4285     2m4s   Normal   Started               pod/hostexec-bootstrap-e2e-minion-group-5wn8-txr5d  Started container agnhost
provisioning-4285     68s    Normal   Killing               pod/hostexec-bootstrap-e2e-minion-group-5wn8-txr5d  Stopping container agnhost
provisioning-4285     94s    Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-q548  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4285     94s    Normal   Created               pod/pod-subpath-test-preprovisionedpv-q548  Created container test-container-subpath-preprovisionedpv-q548
provisioning-4285     93s    Normal   Started               pod/pod-subpath-test-preprovisionedpv-q548  Started container test-container-subpath-preprovisionedpv-q548
provisioning-4285     93s    Normal   Pulled                pod/pod-subpath-test-preprovisionedpv-q548  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4285     93s    Normal   Created               pod/pod-subpath-test-preprovisionedpv-q548  Created container test-container-volume-preprovisionedpv-q548
provisioning-4285     92s    Normal   Started
                  pod/pod-subpath-test-preprovisionedpv-q548                                  Started container test-container-volume-preprovisionedpv-q548\nprovisioning-4285                    87s         Normal    Killing                      pod/pod-subpath-test-preprovisionedpv-q548                                  Stopping container test-container-volume-preprovisionedpv-q548\nprovisioning-4285                    117s        Warning   ProvisioningFailed           persistentvolumeclaim/pvc-gsx7r                                             storageclass.storage.k8s.io \"provisioning-4285\" not found\nprovisioning-4326                    3m26s       Normal    Pulled                       pod/csi-hostpath-attacher-0                                                 Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nprovisioning-4326                    3m25s       Normal    Created                      pod/csi-hostpath-attacher-0                                                 Created container csi-attacher\nprovisioning-4326                    3m23s       Normal    Started                      pod/csi-hostpath-attacher-0                                                 Started container csi-attacher\nprovisioning-4326                    2m38s       Normal    Killing                      pod/csi-hostpath-attacher-0                                                 Stopping container csi-attacher\nprovisioning-4326                    3m36s       Warning   FailedCreate                 statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nprovisioning-4326                    3m34s       Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 
in StatefulSet csi-hostpath-attacher successful\nprovisioning-4326                    3m26s       Normal    Pulled                       pod/csi-hostpath-provisioner-0                                              Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nprovisioning-4326                    3m26s       Normal    Created                      pod/csi-hostpath-provisioner-0                                              Created container csi-provisioner\nprovisioning-4326                    3m23s       Normal    Started                      pod/csi-hostpath-provisioner-0                                              Started container csi-provisioner\nprovisioning-4326                    2m36s       Normal    Killing                      pod/csi-hostpath-provisioner-0                                              Stopping container csi-provisioner\nprovisioning-4326                    3m35s       Warning   FailedCreate                 statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nprovisioning-4326                    3m34s       Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nprovisioning-4326                    3m26s       Normal    Pulled                       pod/csi-hostpath-resizer-0                                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nprovisioning-4326                    3m26s       Normal    Created                      pod/csi-hostpath-resizer-0                                                  Created container csi-resizer\nprovisioning-4326                    3m24s    
   Normal    Started                      pod/csi-hostpath-resizer-0                                                  Started container csi-resizer\nprovisioning-4326                    2m35s       Normal    Killing                      pod/csi-hostpath-resizer-0                                                  Stopping container csi-resizer\nprovisioning-4326                    3m35s       Warning   FailedCreate                 statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nprovisioning-4326                    3m34s       Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nprovisioning-4326                    3m27s       Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathl854q                                     waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4326\" or manually created by system administrator\nprovisioning-4326                    3m23s       Normal    Provisioning                 persistentvolumeclaim/csi-hostpathl854q                                     External provisioner is provisioning volume for claim \"provisioning-4326/csi-hostpathl854q\"\nprovisioning-4326                    3m22s       Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpathl854q                                     Successfully provisioned volume pvc-9eb194b5-cae5-42fd-9965-cf22123cfd4b\nprovisioning-4326                    3m32s       Normal    Pulled                       pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on 
machine\nprovisioning-4326                    3m31s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container node-driver-registrar\nprovisioning-4326                    3m29s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container node-driver-registrar\nprovisioning-4326                    3m29s       Normal    Pulled                       pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nprovisioning-4326                    3m29s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container hostpath\nprovisioning-4326                    3m27s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container hostpath\nprovisioning-4326                    3m27s       Normal    Pulled                       pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nprovisioning-4326                    3m27s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container liveness-probe\nprovisioning-4326                    3m25s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container liveness-probe\nprovisioning-4326                    2m37s       Normal    Killing                      pod/csi-hostpathplugin-0                                                    Stopping container node-driver-registrar\nprovisioning-4326                    2m37s       Normal    Killing                      
pod/csi-hostpathplugin-0                                                    Stopping container liveness-probe\nprovisioning-4326                    2m37s       Normal    Killing                      pod/csi-hostpathplugin-0                                                    Stopping container hostpath\nprovisioning-4326                    2m35s       Warning   Unhealthy                    pod/csi-hostpathplugin-0                                                    Liveness probe failed: Get http://10.64.1.248:9898/healthz: dial tcp 10.64.1.248:9898: connect: connection refused\nprovisioning-4326                    3m37s       Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nprovisioning-4326                    3m29s       Normal    Pulled                       pod/csi-snapshotter-0                                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nprovisioning-4326                    3m28s       Normal    Created                      pod/csi-snapshotter-0                                                       Created container csi-snapshotter\nprovisioning-4326                    3m26s       Normal    Started                      pod/csi-snapshotter-0                                                       Started container csi-snapshotter\nprovisioning-4326                    2m34s       Normal    Killing                      pod/csi-snapshotter-0                                                       Stopping container csi-snapshotter\nprovisioning-4326                    3m34s       Warning   FailedCreate                 statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security 
policy: []\nprovisioning-4326                    3m34s       Normal    SuccessfulCreate             statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nprovisioning-4326                    3m18s       Normal    SuccessfulAttachVolume       pod/pod-subpath-test-dynamicpv-4gn6                                         AttachVolume.Attach succeeded for volume \"pvc-9eb194b5-cae5-42fd-9965-cf22123cfd4b\"\nprovisioning-4326                    3m7s        Normal    Pulled                       pod/pod-subpath-test-dynamicpv-4gn6                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4326                    3m7s        Normal    Created                      pod/pod-subpath-test-dynamicpv-4gn6                                         Created container test-init-subpath-dynamicpv-4gn6\nprovisioning-4326                    3m5s        Normal    Started                      pod/pod-subpath-test-dynamicpv-4gn6                                         Started container test-init-subpath-dynamicpv-4gn6\nprovisioning-4326                    3m4s        Normal    Pulled                       pod/pod-subpath-test-dynamicpv-4gn6                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4326                    3m3s        Normal    Created                      pod/pod-subpath-test-dynamicpv-4gn6                                         Created container test-container-subpath-dynamicpv-4gn6\nprovisioning-4326                    3m          Normal    Started                      pod/pod-subpath-test-dynamicpv-4gn6                                         Started container test-container-subpath-dynamicpv-4gn6\nprovisioning-4326                    3m          Normal    Pulled                       
pod/pod-subpath-test-dynamicpv-4gn6                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4326                    2m59s       Normal    Created                      pod/pod-subpath-test-dynamicpv-4gn6                                         Created container test-container-volume-dynamicpv-4gn6\nprovisioning-4326                    2m57s       Normal    Started                      pod/pod-subpath-test-dynamicpv-4gn6                                         Started container test-container-volume-dynamicpv-4gn6\nprovisioning-4468                    3m59s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-npxgp                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-4468                    3m59s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-npxgp                          Created container agnhost\nprovisioning-4468                    3m59s       Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-npxgp                          Started container agnhost\nprovisioning-4468                    3m2s        Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-npxgp                          Stopping container agnhost\nprovisioning-4468                    3m36s       Warning   FailedMount                  pod/pod-subpath-test-preprovisionedpv-76gr                                  Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[liveness-probe-volume default-token-9q8qh test-volume]: error processing PVC provisioning-4468/pvc-x7467: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-x7467\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-5wn8\" cannot get resource \"persistentvolumeclaims\" in API 
group \"\" in the namespace \"provisioning-4468\": no relationship found between node \"bootstrap-e2e-minion-group-5wn8\" and this object\nprovisioning-4468                    3m23s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-76gr                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-4468                    3m23s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Created container init-volume-preprovisionedpv-76gr\nprovisioning-4468                    3m23s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Started container init-volume-preprovisionedpv-76gr\nprovisioning-4468                    3m22s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-76gr                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4468                    3m22s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Created container test-init-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m21s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Started container test-init-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m21s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-76gr                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4468                    3m21s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Created container 
test-container-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m20s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Started container test-container-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m12s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-76gr                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4468                    3m12s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Created container test-container-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m11s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-76gr                                  Started container test-container-subpath-preprovisionedpv-76gr\nprovisioning-4468                    3m55s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-x7467                                             storageclass.storage.k8s.io \"provisioning-4468\" not found\nprovisioning-5486                    4m54s       Warning   FailedMount                  pod/hostexec-bootstrap-e2e-minion-group-dwjn-s9dpf                          MountVolume.SetUp failed for volume \"default-token-5xzpq\" : failed to sync secret cache: timed out waiting for the condition\nprovisioning-5486                    4m53s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-dwjn-s9dpf                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-5486                    4m53s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-s9dpf                          Created container agnhost\nprovisioning-5486                    4m53s  
     Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-s9dpf                          Started container agnhost\nprovisioning-5486                    4m15s       Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-s9dpf                          Stopping container agnhost\nprovisioning-5486                    4m30s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-7nmg                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-5486                    4m30s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Created container init-volume-preprovisionedpv-7nmg\nprovisioning-5486                    4m27s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Started container init-volume-preprovisionedpv-7nmg\nprovisioning-5486                    4m26s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-7nmg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-5486                    4m25s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Created container test-init-subpath-preprovisionedpv-7nmg\nprovisioning-5486                    4m24s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Started container test-init-subpath-preprovisionedpv-7nmg\nprovisioning-5486                    4m23s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-7nmg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-5486 
                   4m23s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Created container test-container-subpath-preprovisionedpv-7nmg\nprovisioning-5486                    4m23s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Started container test-container-subpath-preprovisionedpv-7nmg\nprovisioning-5486                    4m23s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-7nmg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-5486                    4m23s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Created container test-container-volume-preprovisionedpv-7nmg\nprovisioning-5486                    4m22s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-7nmg                                  Started container test-container-volume-preprovisionedpv-7nmg\nprovisioning-5486                    4m47s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-zbff6                                             storageclass.storage.k8s.io \"provisioning-5486\" not found\nprovisioning-5663                    66s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-dwjn-5mtq8                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-5663                    66s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-5mtq8                          Created container agnhost\nprovisioning-5663                    66s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-5mtq8                          Started 
container agnhost\nprovisioning-5663                    36s         Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-5mtq8                          Stopping container agnhost\nprovisioning-5663                    50s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-5r5x                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-5663                    50s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-5r5x                                  Created container init-volume-preprovisionedpv-5r5x\nprovisioning-5663                    49s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-5r5x                                  Started container init-volume-preprovisionedpv-5r5x\nprovisioning-5663                    48s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-5r5x                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-5663                    48s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-5r5x                                  Created container test-init-volume-preprovisionedpv-5r5x\nprovisioning-5663                    47s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-5r5x                                  Started container test-init-volume-preprovisionedpv-5r5x\nprovisioning-5663                    47s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-5r5x                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-5663                    46s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-5r5x                  
                Created container test-container-subpath-preprovisionedpv-5r5x\nprovisioning-5663                    46s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-5r5x                                  Started container test-container-subpath-preprovisionedpv-5r5x\nprovisioning-5663                    57s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-lfbpz                                             storageclass.storage.k8s.io \"provisioning-5663\" not found\nprovisioning-567                     4m45s       Normal    Pulled                       pod/hostpath-symlink-prep-provisioning-567                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-567                     4m45s       Normal    Created                      pod/hostpath-symlink-prep-provisioning-567                                  Created container init-volume-provisioning-567\nprovisioning-567                     4m42s       Normal    Started                      pod/hostpath-symlink-prep-provisioning-567                                  Started container init-volume-provisioning-567\nprovisioning-567                     4m9s        Warning   FailedMount                  pod/hostpath-symlink-prep-provisioning-567                                  MountVolume.SetUp failed for volume \"default-token-sthvl\" : failed to sync secret cache: timed out waiting for the condition\nprovisioning-567                     4m5s        Normal    Pulled                       pod/hostpath-symlink-prep-provisioning-567                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-567                     4m5s        Normal    Created                      pod/hostpath-symlink-prep-provisioning-567                                  Created container init-volume-provisioning-567\nprovisioning-567                     4m4s        
Normal    Started                  pod/hostpath-symlink-prep-provisioning-567   Started container init-volume-provisioning-567
provisioning-567    4m26s   Warning   FailedMount              pod/pod-subpath-test-inlinevolume-r944   MountVolume.SetUp failed for volume "default-token-sthvl" : failed to sync secret cache: timed out waiting for the condition
provisioning-567    4m24s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-r944   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-567    4m23s   Normal    Created                  pod/pod-subpath-test-inlinevolume-r944   Created container test-init-subpath-inlinevolume-r944
provisioning-567    4m23s   Normal    Started                  pod/pod-subpath-test-inlinevolume-r944   Started container test-init-subpath-inlinevolume-r944
provisioning-567    4m22s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-r944   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-567    4m22s   Normal    Created                  pod/pod-subpath-test-inlinevolume-r944   Created container test-container-subpath-inlinevolume-r944
provisioning-567    4m21s   Normal    Started                  pod/pod-subpath-test-inlinevolume-r944   Started container test-container-subpath-inlinevolume-r944
provisioning-567    4m21s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-r944   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-567    4m21s   Normal    Created                  pod/pod-subpath-test-inlinevolume-r944   Created container test-container-volume-inlinevolume-r944
provisioning-567    4m21s   Normal    Started                  pod/pod-subpath-test-inlinevolume-r944   Started container test-container-volume-inlinevolume-r944
provisioning-597    6m12s   Normal    Pulled                   pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
provisioning-597    6m11s   Normal    Created                  pod/csi-hostpath-attacher-0   Created container csi-attacher
provisioning-597    6m9s    Normal    Started                  pod/csi-hostpath-attacher-0   Started container csi-attacher
provisioning-597    4m59s   Normal    Killing                  pod/csi-hostpath-attacher-0   Stopping container csi-attacher
provisioning-597    6m19s   Warning   FailedCreate             statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-597    6m17s   Normal    SuccessfulCreate         statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-597    6m13s   Normal    Pulled                   pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
provisioning-597    6m12s   Normal    Created                  pod/csi-hostpath-provisioner-0   Created container csi-provisioner
provisioning-597    6m10s   Normal    Started                  pod/csi-hostpath-provisioner-0   Started container csi-provisioner
provisioning-597    4m59s   Normal    Killing                  pod/csi-hostpath-provisioner-0   Stopping container csi-provisioner
provisioning-597    6m19s   Warning   FailedCreate             statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-597    6m18s   Normal    SuccessfulCreate         statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-597    6m11s   Normal    Pulling                  pod/csi-hostpath-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-597    6m5s    Normal    Pulled                   pod/csi-hostpath-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-597    6m4s    Normal    Created                  pod/csi-hostpath-resizer-0   Created container csi-resizer
provisioning-597    6m3s    Normal    Started                  pod/csi-hostpath-resizer-0   Started container csi-resizer
provisioning-597    4m56s   Normal    Killing                  pod/csi-hostpath-resizer-0   Stopping container csi-resizer
provisioning-597    6m19s   Warning   FailedCreate             statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
provisioning-597    6m18s   Normal    SuccessfulCreate         statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
provisioning-597    6m12s   Normal    ExternalProvisioning     persistentvolumeclaim/csi-hostpathplldg   waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-597" or manually created by system administrator
provisioning-597    6m      Normal    Provisioning             persistentvolumeclaim/csi-hostpathplldg   External provisioner is provisioning volume for claim "provisioning-597/csi-hostpathplldg"
provisioning-597    5m59s   Normal    ProvisioningSucceeded    persistentvolumeclaim/csi-hostpathplldg   Successfully provisioned volume pvc-1da621b4-ea10-4643-b57c-5f137aff4986
provisioning-597    6m15s   Normal    Pulled                   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
provisioning-597    6m13s   Normal    Created                  pod/csi-hostpathplugin-0   Created container node-driver-registrar
provisioning-597    6m11s   Normal    Started                  pod/csi-hostpathplugin-0   Started container node-driver-registrar
provisioning-597    6m11s   Normal    Pulling                  pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-597    6m4s    Normal    Pulled                   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-597    6m4s    Normal    Created                  pod/csi-hostpathplugin-0   Created container hostpath
provisioning-597    6m      Normal    Started                  pod/csi-hostpathplugin-0   Started container hostpath
provisioning-597    6m      Normal    Pulling                  pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-597    5m58s   Normal    Pulled                   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-597    5m57s   Normal    Created                  pod/csi-hostpathplugin-0   Created container liveness-probe
provisioning-597    5m57s   Normal    Started                  pod/csi-hostpathplugin-0   Started container liveness-probe
provisioning-597    4m59s   Normal    Killing                  pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
provisioning-597    4m59s   Normal    Killing                  pod/csi-hostpathplugin-0   Stopping container liveness-probe
provisioning-597    4m59s   Normal    Killing                  pod/csi-hostpathplugin-0   Stopping container hostpath
provisioning-597    4m57s   Warning   Unhealthy                pod/csi-hostpathplugin-0   Liveness probe failed: Get http://10.64.3.186:9898/healthz: dial tcp 10.64.3.186:9898: connect: connection refused
provisioning-597    4m56s   Warning   FailedPreStopHook        pod/csi-hostpathplugin-0   Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_provisioning-597(522d5525-c975-4917-a5f0-9b54476bda2e)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
provisioning-597    6m20s   Normal    SuccessfulCreate         statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-597    6m11s   Normal    Pulling                  pod/csi-snapshotter-0   Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-597    6m8s    Normal    Pulled                   pod/csi-snapshotter-0   Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-597    4m53s   Normal    Created                  pod/csi-snapshotter-0   Created container csi-snapshotter
provisioning-597    6m6s    Normal    Started                  pod/csi-snapshotter-0   Started container csi-snapshotter
provisioning-597    6m18s   Normal    SuccessfulCreate         statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
provisioning-597    5m57s   Normal    SuccessfulAttachVolume   pod/pod-subpath-test-dynamicpv-5ffs   AttachVolume.Attach succeeded for volume "pvc-1da621b4-ea10-4643-b57c-5f137aff4986"
provisioning-597    5m38s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-5ffs   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-597    5m37s   Normal    Created                  pod/pod-subpath-test-dynamicpv-5ffs   Created container init-volume-dynamicpv-5ffs
provisioning-597    5m36s   Normal    Started                  pod/pod-subpath-test-dynamicpv-5ffs   Started container init-volume-dynamicpv-5ffs
provisioning-597    5m34s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-5ffs   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-597    5m34s   Normal    Created                  pod/pod-subpath-test-dynamicpv-5ffs   Created container test-init-volume-dynamicpv-5ffs
provisioning-597    5m33s   Normal    Started                  pod/pod-subpath-test-dynamicpv-5ffs   Started container test-init-volume-dynamicpv-5ffs
provisioning-597    5m33s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-5ffs   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-597    5m32s   Normal    Created                  pod/pod-subpath-test-dynamicpv-5ffs   Created container test-container-subpath-dynamicpv-5ffs
provisioning-597    5m31s   Normal    Started                  pod/pod-subpath-test-dynamicpv-5ffs   Started container test-container-subpath-dynamicpv-5ffs
provisioning-6217   6m18s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-7htw-ttsqt   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-6217   6m18s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-7htw-ttsqt   Created container agnhost
provisioning-6217   6m16s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-7htw-ttsqt   Started container agnhost
provisioning-6217   6m5s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-dc9h   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6217   6m5s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-dc9h   Created container init-volume-preprovisionedpv-dc9h
provisioning-6217   6m4s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-dc9h   Started container init-volume-preprovisionedpv-dc9h
provisioning-6217   6m3s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-dc9h   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6217   6m3s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-dc9h   Created container test-container-subpath-preprovisionedpv-dc9h
provisioning-6217   6m2s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-dc9h   Started container test-container-subpath-preprovisionedpv-dc9h
provisioning-6242   101s    Normal    Scheduled                pod/gluster-server   Successfully assigned provisioning-6242/gluster-server to bootstrap-e2e-minion-group-7htw
provisioning-6242   98s     Normal    Pulled                   pod/gluster-server   Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
provisioning-6242   98s     Normal    Created                  pod/gluster-server   Created container gluster-server
provisioning-6242   96s     Normal    Started                  pod/gluster-server   Started container gluster-server
provisioning-6242   33s     Normal    Killing                  pod/gluster-server   Stopping container gluster-server
provisioning-6242   66s     Normal    Scheduled                pod/pod-subpath-test-preprovisionedpv-jg6z   Successfully assigned provisioning-6242/pod-subpath-test-preprovisionedpv-jg6z to bootstrap-e2e-minion-group-7htw
provisioning-6242   65s     Warning   FailedMount              pod/pod-subpath-test-preprovisionedpv-jg6z   Unable to attach or mount volumes: unmounted volumes=[test-volume liveness-probe-volume default-token-5w8vt], unattached volumes=[test-volume liveness-probe-volume default-token-5w8vt]: error processing PVC provisioning-6242/pvc-bxqdn: failed to fetch PVC from API server: persistentvolumeclaims "pvc-bxqdn" is forbidden: User "system:node:bootstrap-e2e-minion-group-7htw" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "provisioning-6242": no relationship found between node "bootstrap-e2e-minion-group-7htw" and this object
provisioning-6242   49s     Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-jg6z   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6242   49s     Normal    Created                  pod/pod-subpath-test-preprovisionedpv-jg6z   Created container init-volume-preprovisionedpv-jg6z
provisioning-6242   48s     Normal    Started                  pod/pod-subpath-test-preprovisionedpv-jg6z   Started container init-volume-preprovisionedpv-jg6z
provisioning-6242   46s     Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-jg6z   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6242   46s     Normal    Created                  pod/pod-subpath-test-preprovisionedpv-jg6z   Created container test-init-volume-preprovisionedpv-jg6z
provisioning-6242   45s     Normal    Started                  pod/pod-subpath-test-preprovisionedpv-jg6z   Started container test-init-volume-preprovisionedpv-jg6z
provisioning-6242   45s     Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-jg6z   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6242   45s     Normal    Created                  pod/pod-subpath-test-preprovisionedpv-jg6z   Created container test-container-subpath-preprovisionedpv-jg6z
provisioning-6242   44s     Normal    Started                  pod/pod-subpath-test-preprovisionedpv-jg6z   Started container test-container-subpath-preprovisionedpv-jg6z
provisioning-6242   82s     Warning   ProvisioningFailed       persistentvolumeclaim/pvc-bxqdn   storageclass.storage.k8s.io "provisioning-6242" not found
provisioning-639    12s     Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-dwjn-v87hn   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-639    12s     Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-dwjn-v87hn   Created container agnhost
provisioning-639    12s     Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-dwjn-v87hn   Started container agnhost
provisioning-639    4s      Warning   ProvisioningFailed       persistentvolumeclaim/pvc-ktkmb   storageclass.storage.k8s.io "provisioning-639" not found
provisioning-6750   87s     Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-1s6w-twzmj   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-6750   87s     Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-twzmj   Created container agnhost
provisioning-6750   87s     Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-twzmj   Started container agnhost
provisioning-6750   37s     Normal    Killing                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-twzmj   Stopping container agnhost
provisioning-6750   62s     Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-r8xr   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6750   62s     Normal    Created                  pod/pod-subpath-test-preprovisionedpv-r8xr   Created container init-volume-preprovisionedpv-r8xr
provisioning-6750   61s     Normal    Started                  pod/pod-subpath-test-preprovisionedpv-r8xr   Started container init-volume-preprovisionedpv-r8xr
provisioning-6750   59s     Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-r8xr   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6750   58s     Normal    Created                  pod/pod-subpath-test-preprovisionedpv-r8xr   Created container test-container-subpath-preprovisionedpv-r8xr
provisioning-6750   58s     Normal    Started                  pod/pod-subpath-test-preprovisionedpv-r8xr   Started container test-container-subpath-preprovisionedpv-r8xr
provisioning-6750   75s     Warning   ProvisioningFailed       persistentvolumeclaim/pvc-crm7q   storageclass.storage.k8s.io "provisioning-6750" not found
provisioning-6820   35s     Normal    Pulled                   pod/hostpath-symlink-prep-provisioning-6820   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6820   34s     Normal    Created                  pod/hostpath-symlink-prep-provisioning-6820   Created container init-volume-provisioning-6820
provisioning-6820   33s     Normal    Started                  pod/hostpath-symlink-prep-provisioning-6820   Started container init-volume-provisioning-6820
provisioning-6820   19s     Normal    Pulled                   pod/hostpath-symlink-prep-provisioning-6820   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6820   18s     Normal    Created                  pod/hostpath-symlink-prep-provisioning-6820   Created container init-volume-provisioning-6820
provisioning-6820   17s     Normal    Started                  pod/hostpath-symlink-prep-provisioning-6820   Started container init-volume-provisioning-6820
provisioning-6820   28s     Normal    Pulled                   pod/pod-subpath-test-inlinevolume-8nwl   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6820   28s     Normal    Created                  pod/pod-subpath-test-inlinevolume-8nwl   Created container init-volume-inlinevolume-8nwl
provisioning-6820   27s     Normal    Started                  pod/pod-subpath-test-inlinevolume-8nwl   Started container init-volume-inlinevolume-8nwl
provisioning-6820   26s     Normal    Pulled                   pod/pod-subpath-test-inlinevolume-8nwl   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6820   26s     Normal    Created                  pod/pod-subpath-test-inlinevolume-8nwl   Created container test-container-subpath-inlinevolume-8nwl
provisioning-6820   25s     Normal    Started                  pod/pod-subpath-test-inlinevolume-8nwl   Started container test-container-subpath-inlinevolume-8nwl
provisioning-6869   5m      Normal    Scheduled                pod/gluster-server   Successfully assigned provisioning-6869/gluster-server to bootstrap-e2e-minion-group-5wn8
provisioning-6869   4m58s   Normal    Pulling                  pod/gluster-server   Pulling image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
provisioning-6869   4m42s   Normal    Pulled                   pod/gluster-server   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
provisioning-6869   4m42s   Normal    Created                  pod/gluster-server   Created container gluster-server
provisioning-6869   4m41s   Normal    Started                  pod/gluster-server   Started container gluster-server
provisioning-6869   4m2s    Normal    Killing                  pod/gluster-server   Stopping container gluster-server
provisioning-6869   4m37s   Normal    Scheduled                pod/pod-subpath-test-inlinevolume-wrgn   Successfully assigned provisioning-6869/pod-subpath-test-inlinevolume-wrgn to bootstrap-e2e-minion-group-5wn8
provisioning-6869   4m34s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-wrgn   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6869   4m34s   Normal    Created                  pod/pod-subpath-test-inlinevolume-wrgn   Created container init-volume-inlinevolume-wrgn
provisioning-6869   4m31s   Normal    Started                  pod/pod-subpath-test-inlinevolume-wrgn   Started container init-volume-inlinevolume-wrgn
provisioning-6869   4m31s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-wrgn   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6869   4m30s   Normal    Created                  pod/pod-subpath-test-inlinevolume-wrgn   Created container test-init-subpath-inlinevolume-wrgn
provisioning-6869   4m30s   Normal    Started                  pod/pod-subpath-test-inlinevolume-wrgn   Started container test-init-subpath-inlinevolume-wrgn
provisioning-6869   4m29s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-wrgn   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6869   4m29s   Normal    Created                  pod/pod-subpath-test-inlinevolume-wrgn   Created container test-container-subpath-inlinevolume-wrgn
provisioning-6869   4m28s   Normal    Started                  pod/pod-subpath-test-inlinevolume-wrgn   Started container test-container-subpath-inlinevolume-wrgn
provisioning-6869   4m19s   Normal    Scheduled                pod/pod-subpath-test-inlinevolume-wrgn   Successfully assigned provisioning-6869/pod-subpath-test-inlinevolume-wrgn to bootstrap-e2e-minion-group-7htw
provisioning-6869   4m12s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-wrgn   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6869   4m12s   Normal    Created                  pod/pod-subpath-test-inlinevolume-wrgn   Created container test-container-subpath-inlinevolume-wrgn
provisioning-6869   4m11s   Normal    Started                  pod/pod-subpath-test-inlinevolume-wrgn   Started container test-container-subpath-inlinevolume-wrgn
provisioning-7136   2m8s    Normal    Scheduled                pod/pod-subpath-test-inlinevolume-2gll   Successfully assigned provisioning-7136/pod-subpath-test-inlinevolume-2gll to bootstrap-e2e-minion-group-7htw
provisioning-7136   2m6s    Normal    Pulled                   pod/pod-subpath-test-inlinevolume-2gll   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-7136   2m6s    Normal    Created                  pod/pod-subpath-test-inlinevolume-2gll   Created container init-volume-inlinevolume-2gll
provisioning-7136   2m4s    Normal    Started                  pod/pod-subpath-test-inlinevolume-2gll   Started container init-volume-inlinevolume-2gll
provisioning-7136   2m3s    Normal    Pulled                   pod/pod-subpath-test-inlinevolume-2gll   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7136   2m3s    Normal    Created                  pod/pod-subpath-test-inlinevolume-2gll   Created container test-container-subpath-inlinevolume-2gll
provisioning-7136   2m1s    Normal    Started                  pod/pod-subpath-test-inlinevolume-2gll   Started container test-container-subpath-inlinevolume-2gll
provisioning-7144   6m11s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-7htw-9ct8b   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-7144   6m11s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-7htw-9ct8b   Created container agnhost
provisioning-7144   6m10s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-7htw-9ct8b   Started container agnhost
provisioning-7144   5m50s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-vnth   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-7144   5m50s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-vnth   Created container init-volume-preprovisionedpv-vnth
provisioning-7144   5m48s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-vnth   Started container init-volume-preprovisionedpv-vnth
provisioning-7144   5m40s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-vnth   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7144   5m40s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-vnth   Created container test-init-volume-preprovisionedpv-vnth
provisioning-7144   5m35s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-vnth   Started container test-init-volume-preprovisionedpv-vnth
provisioning-7144   5m31s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-vnth   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7144   5m29s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-vnth   Created container test-container-subpath-preprovisionedpv-vnth
provisioning-7144   5m22s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-vnth   Started container test-container-subpath-preprovisionedpv-vnth
provisioning-7144   6m2s    Warning   ProvisioningFailed       persistentvolumeclaim/pvc-npkfq   storageclass.storage.k8s.io "provisioning-7144" not found
provisioning-74     6m26s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-5wn8-mh6cr   Created container agnhost
provisioning-74     6m25s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-5wn8-mh6cr   Started container agnhost
provisioning-74     5m54s   Normal    Killing                  pod/hostexec-bootstrap-e2e-minion-group-5wn8-mh6cr   Stopping container agnhost
provisioning-74     6m7s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-55dq   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-74     6m7s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-55dq   Created container init-volume-preprovisionedpv-55dq
provisioning-74     6m6s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-55dq   Started container init-volume-preprovisionedpv-55dq
provisioning-74     6m5s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-55dq   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-74     6m5s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-55dq   Created container test-container-subpath-preprovisionedpv-55dq
provisioning-74     6m4s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-55dq   Started container test-container-subpath-preprovisionedpv-55dq
provisioning-74     6m15s   Warning   ProvisioningFailed       persistentvolumeclaim/pvc-lwkwh   storageclass.storage.k8s.io "provisioning-74" not found
provisioning-7655   3m      Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-1s6w-7566z   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-7655   3m      Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7566z   Created container agnhost
provisioning-7655   2m58s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7566z   Started container agnhost
provisioning-7655   2m19s   Normal    Killing                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7566z   Stopping container agnhost
provisioning-7655   2m35s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-dl9g   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7655   2m35s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-dl9g   Created container test-init-subpath-preprovisionedpv-dl9g
provisioning-7655   2m34s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-dl9g   Started container test-init-subpath-preprovisionedpv-dl9g
provisioning-7655   2m33s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-dl9g   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7655   2m32s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-dl9g   Created container test-container-subpath-preprovisionedpv-dl9g
provisioning-7655   2m31s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-dl9g   Started container test-container-subpath-preprovisionedpv-dl9g
provisioning-7655   2m31s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-dl9g   Container image
\"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-7655                    2m31s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-dl9g                                  Created container test-container-volume-preprovisionedpv-dl9g\nprovisioning-7655                    2m29s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-dl9g                                  Started container test-container-volume-preprovisionedpv-dl9g\nprovisioning-7655                    2m53s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-27klw                                             storageclass.storage.k8s.io \"provisioning-7655\" not found\nprovisioning-78                      2m34s       Normal    Scheduled                    pod/pod-subpath-test-inlinevolume-8tq4                                      Successfully assigned provisioning-78/pod-subpath-test-inlinevolume-8tq4 to bootstrap-e2e-minion-group-5wn8\nprovisioning-78                      2m27s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-8tq4                                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-78                      2m27s       Normal    Created                      pod/pod-subpath-test-inlinevolume-8tq4                                      Created container init-volume-inlinevolume-8tq4\nprovisioning-78                      2m25s       Normal    Started                      pod/pod-subpath-test-inlinevolume-8tq4                                      Started container init-volume-inlinevolume-8tq4\nprovisioning-78                      2m24s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-8tq4                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-78                      
2m23s       Normal    Created                      pod/pod-subpath-test-inlinevolume-8tq4                                      Created container test-container-subpath-inlinevolume-8tq4\nprovisioning-78                      2m21s       Normal    Started                      pod/pod-subpath-test-inlinevolume-8tq4                                      Started container test-container-subpath-inlinevolume-8tq4\nprovisioning-8076                    3m22s       Normal    Scheduled                    pod/pod-subpath-test-inlinevolume-dtvs                                      Successfully assigned provisioning-8076/pod-subpath-test-inlinevolume-dtvs to bootstrap-e2e-minion-group-5wn8\nprovisioning-8076                    3m19s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-dtvs                                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-8076                    3m19s       Normal    Created                      pod/pod-subpath-test-inlinevolume-dtvs                                      Created container init-volume-inlinevolume-dtvs\nprovisioning-8076                    3m18s       Normal    Started                      pod/pod-subpath-test-inlinevolume-dtvs                                      Started container init-volume-inlinevolume-dtvs\nprovisioning-8076                    3m18s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-dtvs                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8076                    3m18s       Normal    Created                      pod/pod-subpath-test-inlinevolume-dtvs                                      Created container test-init-subpath-inlinevolume-dtvs\nprovisioning-8076                    3m17s       Normal    Started                      pod/pod-subpath-test-inlinevolume-dtvs                                      
Started container test-init-subpath-inlinevolume-dtvs\nprovisioning-8076                    3m16s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-dtvs                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8076                    3m16s       Normal    Created                      pod/pod-subpath-test-inlinevolume-dtvs                                      Created container test-container-subpath-inlinevolume-dtvs\nprovisioning-8076                    3m15s       Normal    Started                      pod/pod-subpath-test-inlinevolume-dtvs                                      Started container test-container-subpath-inlinevolume-dtvs\nprovisioning-8076                    3m15s       Normal    Pulled                       pod/pod-subpath-test-inlinevolume-dtvs                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8076                    3m15s       Normal    Created                      pod/pod-subpath-test-inlinevolume-dtvs                                      Created container test-container-volume-inlinevolume-dtvs\nprovisioning-8076                    3m15s       Normal    Started                      pod/pod-subpath-test-inlinevolume-dtvs                                      Started container test-container-volume-inlinevolume-dtvs\nprovisioning-8159                    22s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-bkhwl                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-8159                    22s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-bkhwl                          Created container agnhost\nprovisioning-8159                    21s         Normal    Started                
      pod/hostexec-bootstrap-e2e-minion-group-5wn8-bkhwl                          Started container agnhost\nprovisioning-8159                    8s          Warning   FailedMount                  pod/pod-subpath-test-preprovisionedpv-t56v                                  Unable to attach or mount volumes: unmounted volumes=[test-volume liveness-probe-volume default-token-qmv9r], unattached volumes=[test-volume liveness-probe-volume default-token-qmv9r]: error processing PVC provisioning-8159/pvc-b42rh: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-b42rh\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-5wn8\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"provisioning-8159\": no relationship found between node \"bootstrap-e2e-minion-group-5wn8\" and this object\nprovisioning-8159                    17s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-b42rh                                             storageclass.storage.k8s.io \"provisioning-8159\" not found\nprovisioning-816                     5m23s       Warning   FailedMount                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-jrn5t                          MountVolume.SetUp failed for volume \"default-token-q2kb9\" : failed to sync secret cache: timed out waiting for the condition\nprovisioning-816                     5m22s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-jrn5t                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-816                     5m22s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-jrn5t                          Created container agnhost\nprovisioning-816                     5m22s       Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-jrn5t                          Started container 
agnhost\nprovisioning-816                     4m37s       Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-jrn5t                          Stopping container agnhost\nprovisioning-816                     5m3s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-w4ns                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-816                     5m3s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Created container init-volume-preprovisionedpv-w4ns\nprovisioning-816                     5m3s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Started container init-volume-preprovisionedpv-w4ns\nprovisioning-816                     5m3s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-w4ns                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-816                     5m2s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Created container test-init-subpath-preprovisionedpv-w4ns\nprovisioning-816                     5m1s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Started container test-init-subpath-preprovisionedpv-w4ns\nprovisioning-816                     5m          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-w4ns                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-816                     5m          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-w4ns                          
        Created container test-container-subpath-preprovisionedpv-w4ns\nprovisioning-816                     4m57s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Started container test-container-subpath-preprovisionedpv-w4ns\nprovisioning-816                     4m57s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-w4ns                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-816                     4m57s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Created container test-container-volume-preprovisionedpv-w4ns\nprovisioning-816                     4m54s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-w4ns                                  Started container test-container-volume-preprovisionedpv-w4ns\nprovisioning-816                     5m17s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-gngzx                                             storageclass.storage.k8s.io \"provisioning-816\" not found\nprovisioning-8161                    4m47s       Normal    Scheduled                    pod/gluster-server                                                          Successfully assigned provisioning-8161/gluster-server to bootstrap-e2e-minion-group-7htw\nprovisioning-8161                    4m43s       Normal    Pulled                       pod/gluster-server                                                          Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nprovisioning-8161                    4m43s       Normal    Created                      pod/gluster-server                                                          Created container gluster-server\nprovisioning-8161                    4m41s  
     Normal    Started                      pod/gluster-server                                                          Started container gluster-server\nprovisioning-8161                    3m49s       Normal    Killing                      pod/gluster-server                                                          Stopping container gluster-server\nprovisioning-8161                    4m20s       Normal    Scheduled                    pod/pod-subpath-test-preprovisionedpv-wrbg                                  Successfully assigned provisioning-8161/pod-subpath-test-preprovisionedpv-wrbg to bootstrap-e2e-minion-group-7htw\nprovisioning-8161                    4m12s       Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-wrbg                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-8161                    4m12s       Normal    Created                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Created container init-volume-preprovisionedpv-wrbg\nprovisioning-8161                    4m10s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Started container init-volume-preprovisionedpv-wrbg\nprovisioning-8161                    4m9s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-wrbg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8161                    4m9s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Created container test-init-subpath-preprovisionedpv-wrbg\nprovisioning-8161                    4m8s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Started container 
test-init-subpath-preprovisionedpv-wrbg\nprovisioning-8161                    4m5s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-wrbg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8161                    4m5s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Created container test-container-subpath-preprovisionedpv-wrbg\nprovisioning-8161                    4m2s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Started container test-container-subpath-preprovisionedpv-wrbg\nprovisioning-8161                    4m2s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-wrbg                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8161                    4m2s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Created container test-container-volume-preprovisionedpv-wrbg\nprovisioning-8161                    3m59s       Normal    Started                      pod/pod-subpath-test-preprovisionedpv-wrbg                                  Started container test-container-volume-preprovisionedpv-wrbg\nprovisioning-8161                    4m36s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-8x2dt                                             storageclass.storage.k8s.io \"provisioning-8161\" not found\nprovisioning-8464                    5m38s       Normal    WaitForFirstConsumer         persistentvolumeclaim/gcepdbwwpl                                            waiting for first consumer to be created before binding\nprovisioning-8464                    5m35s       Normal    ProvisioningSucceeded   
     persistentvolumeclaim/gcepdbwwpl                                            Successfully provisioned volume pvc-dea36588-7eb2-46b1-892d-b4f8bca65c41 using kubernetes.io/gce-pd\nprovisioning-8464                    5m34s       Normal    Scheduled                    pod/pod-subpath-test-dynamicpv-fw5j                                         Successfully assigned provisioning-8464/pod-subpath-test-dynamicpv-fw5j to bootstrap-e2e-minion-group-dwjn\nprovisioning-8464                    5m28s       Normal    SuccessfulAttachVolume       pod/pod-subpath-test-dynamicpv-fw5j                                         AttachVolume.Attach succeeded for volume \"pvc-dea36588-7eb2-46b1-892d-b4f8bca65c41\"\nprovisioning-8464                    5m14s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-fw5j                                         Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-8464                    5m14s       Normal    Created                      pod/pod-subpath-test-dynamicpv-fw5j                                         Created container init-volume-dynamicpv-fw5j\nprovisioning-8464                    5m14s       Normal    Started                      pod/pod-subpath-test-dynamicpv-fw5j                                         Started container init-volume-dynamicpv-fw5j\nprovisioning-8464                    5m13s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-fw5j                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8464                    5m13s       Normal    Created                      pod/pod-subpath-test-dynamicpv-fw5j                                         Created container test-init-subpath-dynamicpv-fw5j\nprovisioning-8464                    5m13s       Normal    Started                      pod/pod-subpath-test-dynamicpv-fw5j                                 
        Started container test-init-subpath-dynamicpv-fw5j\nprovisioning-8464                    5m12s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-fw5j                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8464                    5m12s       Normal    Created                      pod/pod-subpath-test-dynamicpv-fw5j                                         Created container test-container-subpath-dynamicpv-fw5j\nprovisioning-8464                    5m11s       Normal    Started                      pod/pod-subpath-test-dynamicpv-fw5j                                         Started container test-container-subpath-dynamicpv-fw5j\nprovisioning-8464                    5m11s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-fw5j                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8464                    5m11s       Normal    Created                      pod/pod-subpath-test-dynamicpv-fw5j                                         Created container test-container-volume-dynamicpv-fw5j\nprovisioning-8464                    5m11s       Normal    Started                      pod/pod-subpath-test-dynamicpv-fw5j                                         Started container test-container-volume-dynamicpv-fw5j\nprovisioning-8545                    2m24s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-dwjn-nsf52                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-8545                    2m24s       Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-nsf52                          Created container agnhost\nprovisioning-8545                    2m23s       Normal    Started                      
pod/hostexec-bootstrap-e2e-minion-group-dwjn-nsf52                          Started container agnhost\nprovisioning-8545                    105s        Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-dwjn-nsf52                          Stopping container agnhost\nprovisioning-8545                    2m3s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tz4m                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-8545                    2m3s        Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Created container init-volume-preprovisionedpv-tz4m\nprovisioning-8545                    2m2s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Started container init-volume-preprovisionedpv-tz4m\nprovisioning-8545                    2m          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tz4m                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8545                    2m          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Created container test-init-volume-preprovisionedpv-tz4m\nprovisioning-8545                    2m          Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Started container test-init-volume-preprovisionedpv-tz4m\nprovisioning-8545                    119s        Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tz4m                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8545                    119s        Normal    
Created                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Created container test-container-subpath-preprovisionedpv-tz4m\nprovisioning-8545                    119s        Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tz4m                                  Started container test-container-subpath-preprovisionedpv-tz4m\nprovisioning-8545                    2m16s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-wt87w                                             storageclass.storage.k8s.io \"provisioning-8545\" not found\nprovisioning-8958                    15s         Normal    Scheduled                    pod/gluster-server                                                          Successfully assigned provisioning-8958/gluster-server to bootstrap-e2e-minion-group-7htw\nprovisioning-8958                    13s         Normal    Pulled                       pod/gluster-server                                                          Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nprovisioning-8958                    12s         Normal    Created                      pod/gluster-server                                                          Created container gluster-server\nprovisioning-8958                    11s         Normal    Started                      pod/gluster-server                                                          Started container gluster-server\nprovisioning-905                     2m50s       Normal    WaitForFirstConsumer         persistentvolumeclaim/gcepdt5qfp                                            waiting for first consumer to be created before binding\nprovisioning-905                     2m46s       Normal    ProvisioningSucceeded        persistentvolumeclaim/gcepdt5qfp                                            Successfully provisioned volume pvc-b9217635-4527-462e-8fbd-5552c4e3816d 
using kubernetes.io/gce-pd\nprovisioning-905                     2m44s       Normal    Scheduled                    pod/pod-subpath-test-dynamicpv-pzmr                                         Successfully assigned provisioning-905/pod-subpath-test-dynamicpv-pzmr to bootstrap-e2e-minion-group-5wn8\nprovisioning-905                     2m37s       Normal    SuccessfulAttachVolume       pod/pod-subpath-test-dynamicpv-pzmr                                         AttachVolume.Attach succeeded for volume \"pvc-b9217635-4527-462e-8fbd-5552c4e3816d\"\nprovisioning-905                     2m20s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-pzmr                                         Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-905                     2m20s       Normal    Created                      pod/pod-subpath-test-dynamicpv-pzmr                                         Created container init-volume-dynamicpv-pzmr\nprovisioning-905                     2m19s       Normal    Started                      pod/pod-subpath-test-dynamicpv-pzmr                                         Started container init-volume-dynamicpv-pzmr\nprovisioning-905                     2m18s       Normal    Pulled                       pod/pod-subpath-test-dynamicpv-pzmr                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-905                     2m17s       Normal    Created                      pod/pod-subpath-test-dynamicpv-pzmr                                         Created container test-container-subpath-dynamicpv-pzmr\nprovisioning-905                     2m17s       Normal    Started                      pod/pod-subpath-test-dynamicpv-pzmr                                         Started container test-container-subpath-dynamicpv-pzmr\nprovisioning-9197                    36s         Normal    Pulled                    
   pod/hostexec-bootstrap-e2e-minion-group-5wn8-mrvhm                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-9197                    35s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-mrvhm                          Created container agnhost\nprovisioning-9197                    34s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-mrvhm                          Started container agnhost\nprovisioning-9197                    3s          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-7ds9                                  Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-9197                    3s          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-7ds9                                  Created container test-container-subpath-preprovisionedpv-7ds9\nprovisioning-9197                    25s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-66t9j                                             storageclass.storage.k8s.io \"provisioning-9197\" not found\nprovisioning-9309                    3m8s        Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-d2qxp                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-9309                    3m8s        Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-d2qxp                          Created container agnhost\nprovisioning-9309                    3m8s        Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-d2qxp                          Started container agnhost\nprovisioning-9309                    2m19s       Normal    Killing                      
pod/hostexec-bootstrap-e2e-minion-group-5wn8-d2qxp  Stopping container agnhost
provisioning-9309  2m49s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-lvjd  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9309  2m49s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-lvjd  Created container init-volume-preprovisionedpv-lvjd
provisioning-9309  2m48s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-lvjd  Started container init-volume-preprovisionedpv-lvjd
provisioning-9309  2m48s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-lvjd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9309  2m47s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-lvjd  Created container test-init-subpath-preprovisionedpv-lvjd
provisioning-9309  2m44s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-lvjd  Started container test-init-subpath-preprovisionedpv-lvjd
provisioning-9309  2m42s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-lvjd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9309  2m41s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-lvjd  Created container test-container-subpath-preprovisionedpv-lvjd
provisioning-9309  2m39s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-lvjd  Started container test-container-subpath-preprovisionedpv-lvjd
provisioning-9309  2m39s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-lvjd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9309  2m38s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-lvjd  Created container test-container-volume-preprovisionedpv-lvjd
provisioning-9309  2m35s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-lvjd  Started container test-container-volume-preprovisionedpv-lvjd
provisioning-9309  3m3s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-84h4f  storageclass.storage.k8s.io "provisioning-9309" not found
provisioning-9419  64s  Warning  FailedMount  pod/hostexec-bootstrap-e2e-minion-group-dwjn-tp9x7  MountVolume.SetUp failed for volume "default-token-67726" : failed to sync secret cache: timed out waiting for the condition
provisioning-9419  63s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-dwjn-tp9x7  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-9419  63s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-dwjn-tp9x7  Created container agnhost
provisioning-9419  62s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-dwjn-tp9x7  Started container agnhost
provisioning-9419  22s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-dwjn-tp9x7  Stopping container agnhost
provisioning-9419  34s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-ffpc  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9419  34s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-ffpc  Created container test-container-subpath-preprovisionedpv-ffpc
provisioning-9419  34s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-ffpc  Started container test-container-subpath-preprovisionedpv-ffpc
provisioning-9419  34s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-ffpc  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9419  34s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-ffpc  Created container test-container-volume-preprovisionedpv-ffpc
provisioning-9419  33s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-ffpc  Started container test-container-volume-preprovisionedpv-ffpc
provisioning-9419  30s  Normal  Killing  pod/pod-subpath-test-preprovisionedpv-ffpc  Stopping container test-container-volume-preprovisionedpv-ffpc
provisioning-9419  54s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-t6jr2  storageclass.storage.k8s.io "provisioning-9419" not found
provisioning-9459  27s  Normal  Scheduled  pod/pod-subpath-test-inlinevolume-h5xp  Successfully assigned provisioning-9459/pod-subpath-test-inlinevolume-h5xp to bootstrap-e2e-minion-group-7htw
provisioning-9459  22s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-h5xp  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9459  22s  Normal  Created  pod/pod-subpath-test-inlinevolume-h5xp  Created container init-volume-inlinevolume-h5xp
provisioning-9459  19s  Normal  Started  pod/pod-subpath-test-inlinevolume-h5xp  Started container init-volume-inlinevolume-h5xp
provisioning-9459  18s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-h5xp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9459  17s  Normal  Created  pod/pod-subpath-test-inlinevolume-h5xp  Created container test-init-volume-inlinevolume-h5xp
provisioning-9459  15s  Normal  Started  pod/pod-subpath-test-inlinevolume-h5xp  Started container test-init-volume-inlinevolume-h5xp
provisioning-9459  13s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-h5xp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9459  13s  Normal  Created  pod/pod-subpath-test-inlinevolume-h5xp  Created container test-container-subpath-inlinevolume-h5xp
provisioning-9459  12s  Normal  Started  pod/pod-subpath-test-inlinevolume-h5xp  Started container test-container-subpath-inlinevolume-h5xp
provisioning-9502  4m54s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-9502  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9502  4m54s  Normal  Created  pod/hostpath-symlink-prep-provisioning-9502  Created container init-volume-provisioning-9502
provisioning-9502  4m53s  Normal  Started  pod/hostpath-symlink-prep-provisioning-9502  Started container init-volume-provisioning-9502
provisioning-9502  4m46s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-9502  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9502  4m46s  Normal  Created  pod/hostpath-symlink-prep-provisioning-9502  Created container init-volume-provisioning-9502
provisioning-9502  4m46s  Normal  Started  pod/hostpath-symlink-prep-provisioning-9502  Started container init-volume-provisioning-9502
provisioning-9519  97s  Warning  FailedMount  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7zn9h  MountVolume.SetUp failed for volume "default-token-cdmvd" : failed to sync secret cache: timed out waiting for the condition
provisioning-9519  95s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7zn9h  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-9519  95s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7zn9h  Created container agnhost
provisioning-9519  95s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7zn9h  Started container agnhost
provisioning-9519  45s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-1s6w-7zn9h  Stopping container agnhost
provisioning-9519  74s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-qnm5  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9519  73s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-qnm5  Created container test-init-subpath-preprovisionedpv-qnm5
provisioning-9519  71s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-qnm5  Started container test-init-subpath-preprovisionedpv-qnm5
provisioning-9519  70s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-qnm5  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9519  69s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-qnm5  Created container test-container-subpath-preprovisionedpv-qnm5
provisioning-9519  67s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-qnm5  Started container test-container-subpath-preprovisionedpv-qnm5
provisioning-9519  67s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-qnm5  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9519  67s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-qnm5  Created container test-container-volume-preprovisionedpv-qnm5
provisioning-9519  65s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-qnm5  Started container test-container-volume-preprovisionedpv-qnm5
provisioning-9519  89s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-ntb2v  storageclass.storage.k8s.io "provisioning-9519" not found
provisioning-9882  3m59s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-1s6w-46sfp  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-9882  3m59s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-1s6w-46sfp  Created container agnhost
provisioning-9882  3m57s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-1s6w-46sfp  Started container agnhost
provisioning-9882  2m54s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-1s6w-46sfp  Stopping container agnhost
provisioning-9882  3m18s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-8gs6  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-9882  3m18s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-8gs6  Created container init-volume-preprovisionedpv-8gs6
provisioning-9882  3m15s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-8gs6  Started container init-volume-preprovisionedpv-8gs6
provisioning-9882  3m12s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-8gs6  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9882  3m12s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-8gs6  Created container test-init-volume-preprovisionedpv-8gs6
provisioning-9882  3m11s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-8gs6  Started container test-init-volume-preprovisionedpv-8gs6
provisioning-9882  3m10s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-8gs6  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9882  3m10s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-8gs6  Created container test-container-subpath-preprovisionedpv-8gs6
provisioning-9882  3m9s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-8gs6  Started container test-container-subpath-preprovisionedpv-8gs6
provisioning-9882  3m35s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-jrw4c  storageclass.storage.k8s.io "provisioning-9882" not found
pv-7879  2m32s  Normal  Scheduled  pod/nfs-server  Successfully assigned pv-7879/nfs-server to bootstrap-e2e-minion-group-1s6w
pv-7879  2m29s  Normal  Pulled  pod/nfs-server  Container image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0" already present on machine
pv-7879  2m29s  Normal  Created  pod/nfs-server  Created container nfs-server
pv-7879  2m28s  Normal  Started  pod/nfs-server  Started container nfs-server
pv-7879  62s  Normal  Killing  pod/nfs-server  Stopping container nfs-server
pv-7879  2m20s  Normal  Scheduled  pod/pvc-tester-jwgd6  Successfully assigned pv-7879/pvc-tester-jwgd6 to bootstrap-e2e-minion-group-7htw
pv-7879  2m16s  Normal  Pulled  pod/pvc-tester-jwgd6  Container image "docker.io/library/busybox:1.29" already present on machine
pv-7879  2m16s  Normal  Created  pod/pvc-tester-jwgd6  Created container write-pod
pv-7879  2m15s  Normal  Started  pod/pvc-tester-jwgd6  Started container write-pod
pv-7879  96s  Normal  Scheduled  pod/pvc-tester-z9lzz  Successfully assigned pv-7879/pvc-tester-z9lzz to bootstrap-e2e-minion-group-7htw
pv-7879  91s  Normal  Pulled  pod/pvc-tester-z9lzz  Container image "docker.io/library/busybox:1.29" already present on machine
pv-7879  91s  Normal  Created  pod/pvc-tester-z9lzz  Created container write-pod
pv-7879  91s  Normal  Started  pod/pvc-tester-z9lzz  Started container write-pod
pv-7879  2m1s  Normal  Scheduled  pod/pvc-tester-zt8mk  Successfully assigned pv-7879/pvc-tester-zt8mk to bootstrap-e2e-minion-group-7htw
pv-7879  2m1s  Warning  FailedMount  pod/pvc-tester-zt8mk  Unable to attach or mount volumes: unmounted volumes=[volume1 default-token-nlbfm], unattached volumes=[volume1 default-token-nlbfm]: error processing PVC pv-7879/pvc-qb22c: failed to fetch PVC from API server: persistentvolumeclaims "pvc-qb22c" is forbidden: User "system:node:bootstrap-e2e-minion-group-7htw" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "pv-7879": no relationship found between node "bootstrap-e2e-minion-group-7htw" and this object
pv-7879  2m  Warning  FailedMount  pod/pvc-tester-zt8mk  MountVolume.SetUp failed for volume "default-token-nlbfm" : failed to sync secret cache: timed out waiting for the condition
pv-7879  108s  Normal  Pulled  pod/pvc-tester-zt8mk  Container image "docker.io/library/busybox:1.29" already present on machine
pv-7879  108s  Normal  Created  pod/pvc-tester-zt8mk  Created container write-pod
pv-7879  106s  Normal  Started  pod/pvc-tester-zt8mk  Started container write-pod
pv-7879  103s  Normal  SandboxChanged  pod/pvc-tester-zt8mk  Pod sandbox changed, it will be killed and re-created.
pv-9230  4m41s  Normal  Scheduled  pod/pod-ephm-test-projected-lhxm  Successfully assigned pv-9230/pod-ephm-test-projected-lhxm to bootstrap-e2e-minion-group-5wn8
pv-9230  4m9s  Warning  FailedMount  pod/pod-ephm-test-projected-lhxm  MountVolume.SetUp failed for volume "test-volume" : secret "secret-pod-ephm-test" not found
pvc-protection-1491  19s  Normal  ProvisioningSucceeded  persistentvolumeclaim/pvc-protectionhqwcz  Successfully provisioned volume pvc-344de9ed-b6d6-4b45-aa20-49af4de68ad9 using kubernetes.io/gce-pd
pvc-protection-1491  20s  Warning  FailedScheduling  pod/pvc-tester-np4sb  running "VolumeBinding" filter plugin for pod "pvc-tester-np4sb": pod has unbound immediate PersistentVolumeClaims
pvc-protection-1491  17s  Normal  Scheduled  pod/pvc-tester-np4sb  Successfully assigned pvc-protection-1491/pvc-tester-np4sb to bootstrap-e2e-minion-group-7htw
pvc-protection-1491  11s  Normal  SuccessfulAttachVolume  pod/pvc-tester-np4sb  AttachVolume.Attach succeeded for volume "pvc-344de9ed-b6d6-4b45-aa20-49af4de68ad9"
pvc-protection-1491  4s  Normal  Pulled  pod/pvc-tester-np4sb  Container image "docker.io/library/busybox:1.29" already present on machine
pvc-protection-1491  4s  Normal  Created  pod/pvc-tester-np4sb  Created container write-pod
pvc-protection-1491  4s  Normal  Started  pod/pvc-tester-np4sb  Started container write-pod
pvc-protection-8994  118s  Normal  ProvisioningSucceeded  persistentvolumeclaim/pvc-protectionxhmnr  Successfully provisioned volume pvc-a04cae35-b30a-4eac-a043-282342b296db using kubernetes.io/gce-pd
pvc-protection-8994  2m  Warning  FailedScheduling  pod/pvc-tester-99zhn  running "VolumeBinding" filter plugin for pod "pvc-tester-99zhn": pod has unbound immediate PersistentVolumeClaims
pvc-protection-8994  116s  Normal  Scheduled  pod/pvc-tester-99zhn  Successfully assigned pvc-protection-8994/pvc-tester-99zhn to bootstrap-e2e-minion-group-7htw
pvc-protection-8994  110s  Normal  SuccessfulAttachVolume  pod/pvc-tester-99zhn  AttachVolume.Attach succeeded for volume "pvc-a04cae35-b30a-4eac-a043-282342b296db"
pvc-protection-8994  103s  Normal  Pulled  pod/pvc-tester-99zhn  Container image "docker.io/library/busybox:1.29" already present on machine
pvc-protection-8994  103s  Normal  Created  pod/pvc-tester-99zhn  Created container write-pod
pvc-protection-8994  103s  Normal  Started  pod/pvc-tester-99zhn  Started container write-pod
pvc-protection-8994  90s  Normal  Killing  pod/pvc-tester-99zhn  Stopping container write-pod
replicaset-487  5m18s  Normal  Scheduled  pod/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds  Successfully assigned replicaset-487/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds to bootstrap-e2e-minion-group-5wn8
replicaset-487  5m15s  Normal  Pulled  pod/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
replicaset-487  5m15s  Normal  Created  pod/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds  Created container my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb
replicaset-487  5m14s  Normal  Started  pod/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds  Started container my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb
replicaset-487  5m18s  Normal  SuccessfulCreate  replicaset/my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb  Created pod: my-hostname-basic-0c98edc1-4d84-4773-a232-1a141e8f0eeb-dn5ds
replicaset-9298  4m35s  Normal  Scheduled  pod/condition-test-fdz6d  Successfully assigned replicaset-9298/condition-test-fdz6d to bootstrap-e2e-minion-group-1s6w
replicaset-9298  4m33s  Normal  Pulled  pod/condition-test-fdz6d  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
replicaset-9298  4m33s  Normal  Created  pod/condition-test-fdz6d  Created container httpd
replicaset-9298  4m33s  Normal  Started  pod/condition-test-fdz6d  Started container httpd
replicaset-9298  4m34s  Normal  Scheduled  pod/condition-test-q2dvg  Successfully assigned replicaset-9298/condition-test-q2dvg to bootstrap-e2e-minion-group-7htw
replicaset-9298  4m35s  Normal  SuccessfulCreate  replicaset/condition-test  Created pod: condition-test-fdz6d
replicaset-9298  4m35s  Warning  FailedCreate  replicaset/condition-test  Error creating: pods "condition-test-l52rk" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
replicaset-9298  4m35s  Normal  SuccessfulCreate  replicaset/condition-test  Created pod: condition-test-q2dvg
replicaset-9298  4m34s  Warning  FailedCreate  replicaset/condition-test  Error creating: pods "condition-test-m8xhf" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
replicaset-9298  4m34s  Warning  FailedCreate  replicaset/condition-test  Error creating: pods "condition-test-dx55l" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
replicaset-9298  4m34s  Warning  FailedCreate  replicaset/condition-test  Error creating: pods "condition-test-pbj49" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
replicaset-9298  4m34s  Warning  FailedCreate  replicaset/condition-test  Error creating: pods "condition-test-rjcmb" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
resourcequota-3837  115s  Warning  ProvisioningFailed  persistentvolumeclaim/test-claim  storageclass.storage.k8s.io "gold" not found
resourcequota-4366  4m33s  Normal  Scheduled  pod/terminating-pod  Successfully assigned resourcequota-4366/terminating-pod to bootstrap-e2e-minion-group-5wn8
resourcequota-4366  4m31s  Normal  Pulled  pod/terminating-pod  Container image "k8s.gcr.io/pause:3.1" already present on machine
resourcequota-4366  4m31s  Normal  Created  pod/terminating-pod  Created container pause
resourcequota-4366  4m30s  Normal  Started  pod/terminating-pod  Started container pause
resourcequota-4366  4m28s  Normal  Killing  pod/terminating-pod  Stopping container pause
resourcequota-4366  4m27s  Warning  FailedMount  pod/terminating-pod  MountVolume.SetUp failed for volume "default-token-tzf58" : object "resourcequota-4366"/"default-token-tzf58" not registered
resourcequota-4366  4m43s  Normal  Scheduled  pod/test-pod  Successfully assigned resourcequota-4366/test-pod to bootstrap-e2e-minion-group-dwjn
resourcequota-4366  4m42s  Normal  Pulled  pod/test-pod  Container image "k8s.gcr.io/pause:3.1" already present on machine
resourcequota-4366  4m42s  Normal  Created  pod/test-pod  Created container pause
resourcequota-4366  4m42s  Normal  Started  pod/test-pod  Started container pause
resourcequota-4366  4m37s  Normal  Killing  pod/test-pod  Stopping container pause
secrets-2059  2m59s  Normal  Scheduled  pod/pod-secrets-fb9795d8-0099-4903-b617-38835cbc439e  Successfully assigned secrets-2059/pod-secrets-fb9795d8-0099-4903-b617-38835cbc439e to bootstrap-e2e-minion-group-5wn8
secrets-2059  2m56s  Normal  Pulled  pod/pod-secrets-fb9795d8-0099-4903-b617-38835cbc439e  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-2059  2m55s  Normal  Created  pod/pod-secrets-fb9795d8-0099-4903-b617-38835cbc439e  Created container secret-volume-test
secrets-2059  2m54s  Normal  Started  pod/pod-secrets-fb9795d8-0099-4903-b617-38835cbc439e  Started container secret-volume-test
secrets-6813  6m2s  Normal  Scheduled  pod/pod-secrets-b5e7ca21-fecc-4928-9cbe-5749778dab68  Successfully assigned secrets-6813/pod-secrets-b5e7ca21-fecc-4928-9cbe-5749778dab68 to bootstrap-e2e-minion-group-1s6w
secrets-6813  5m58s  Normal  Pulled  pod/pod-secrets-b5e7ca21-fecc-4928-9cbe-5749778dab68  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-6813  5m57s  Normal  Created  pod/pod-secrets-b5e7ca21-fecc-4928-9cbe-5749778dab68  Created container secret-volume-test
secrets-6813  5m56s  Normal  Started  pod/pod-secrets-b5e7ca21-fecc-4928-9cbe-5749778dab68  Started container secret-volume-test
secrets-7138  4m52s  Normal  Scheduled  pod/pod-secrets-767f5c32-2a96-4e11-8eab-7e3addd1736d  Successfully assigned secrets-7138/pod-secrets-767f5c32-2a96-4e11-8eab-7e3addd1736d to bootstrap-e2e-minion-group-dwjn
secrets-7138  4m50s  Normal  Pulled  pod/pod-secrets-767f5c32-2a96-4e11-8eab-7e3addd1736d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-7138  4m50s  Normal  Created  pod/pod-secrets-767f5c32-2a96-4e11-8eab-7e3addd1736d  Created container secret-volume-test
secrets-7138  4m48s  Normal  Started  pod/pod-secrets-767f5c32-2a96-4e11-8eab-7e3addd1736d  Started container secret-volume-test
secrets-7486  2m27s  Normal  Scheduled  pod/pod-secrets-5ac05cdd-3a7c-45fc-8f84-66a9098ad603  Successfully assigned secrets-7486/pod-secrets-5ac05cdd-3a7c-45fc-8f84-66a9098ad603 to bootstrap-e2e-minion-group-7htw
secrets-7486  2m25s  Normal  Pulled  pod/pod-secrets-5ac05cdd-3a7c-45fc-8f84-66a9098ad603  Container image "docker.io/library/busybox:1.29" already present on machine
secrets-7486  2m25s  Normal  Created  pod/pod-secrets-5ac05cdd-3a7c-45fc-8f84-66a9098ad603  Created container secret-env-test
secrets-7486  2m25s  Normal  Started  pod/pod-secrets-5ac05cdd-3a7c-45fc-8f84-66a9098ad603  Started container secret-env-test
secrets-7491  18s  Normal  Scheduled  pod/pod-secrets-05af677a-7399-4bc3-83e3-26fbbca3b129  Successfully assigned secrets-7491/pod-secrets-05af677a-7399-4bc3-83e3-26fbbca3b129 to bootstrap-e2e-minion-group-7htw
secrets-7491  14s  Normal  Pulled  pod/pod-secrets-05af677a-7399-4bc3-83e3-26fbbca3b129  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-7491  14s  Normal  Created  pod/pod-secrets-05af677a-7399-4bc3-83e3-26fbbca3b129  Created container secret-volume-test
secrets-7491  11s  Normal  Started  pod/pod-secrets-05af677a-7399-4bc3-83e3-26fbbca3b129  Started container secret-volume-test
secrets-8621  2m7s  Normal  Scheduled  pod/pod-secrets-bf9df117-741f-4018-baf1-abb5f7ed6b65  Successfully assigned secrets-8621/pod-secrets-bf9df117-741f-4018-baf1-abb5f7ed6b65 to bootstrap-e2e-minion-group-7htw
secrets-8621  2m3s  Normal  Pulled  pod/pod-secrets-bf9df117-741f-4018-baf1-abb5f7ed6b65  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-8621  2m3s  Normal  Created  pod/pod-secrets-bf9df117-741f-4018-baf1-abb5f7ed6b65
                 Created container secret-volume-test\nsecrets-8621                         2m1s        Normal    Started                      pod/pod-secrets-bf9df117-741f-4018-baf1-abb5f7ed6b65                        Started container secret-volume-test\nsecurity-context-test-1012           5m38s       Normal    Scheduled                    pod/busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece            Successfully assigned security-context-test-1012/busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece to bootstrap-e2e-minion-group-dwjn\nsecurity-context-test-1012           5m37s       Normal    Pulled                       pod/busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece            Container image \"docker.io/library/busybox:1.29\" already present on machine\nsecurity-context-test-1012           5m37s       Normal    Created                      pod/busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece            Created container busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece\nsecurity-context-test-1012           5m37s       Normal    Started                      pod/busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece            Started container busybox-privileged-true-070a4194-fb93-48ab-805d-06ad85c9eece\nsecurity-context-test-3483           4m39s       Normal    Scheduled                    pod/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1           Successfully assigned security-context-test-3483/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1 to bootstrap-e2e-minion-group-5wn8\nsecurity-context-test-3483           4m38s       Warning   FailedMount                  pod/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1           MountVolume.SetUp failed for volume \"default-token-7r2q2\" : failed to sync secret cache: timed out waiting for the condition\nsecurity-context-test-3483           4m37s       Normal    Pulled                       
pod/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1           Container image \"docker.io/library/busybox:1.29\" already present on machine\nsecurity-context-test-3483           4m37s       Normal    Created                      pod/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1           Created container busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1\nsecurity-context-test-3483           4m36s       Normal    Started                      pod/busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1           Started container busybox-privileged-false-acb0f0c3-1d1b-4dce-acf6-c9eb8ba1dce1\nsecurity-context-test-4184           4m37s       Normal    Scheduled                    pod/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0                     Successfully assigned security-context-test-4184/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0 to bootstrap-e2e-minion-group-dwjn\nsecurity-context-test-4184           4m30s       Normal    Pulling                      pod/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0                     Pulling image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-4184           4m27s       Normal    Pulled                       pod/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0                     Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-4184           4m27s       Normal    Created                      pod/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0                     Created container alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0\nsecurity-context-test-4184           4m26s       Normal    Started                      pod/alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0                     Started container alpine-nnp-nil-3e7a3a06-d14f-4354-a7c3-fb54458970d0\nsecurity-context-test-7109           48s         Normal    Scheduled                    
pod/implicit-root-uid                                                       Successfully assigned security-context-test-7109/implicit-root-uid to bootstrap-e2e-minion-group-7htw\nsecurity-context-test-7109           47s         Warning   FailedMount                  pod/implicit-root-uid                                                       MountVolume.SetUp failed for volume \"default-token-ckxcz\" : failed to sync secret cache: timed out waiting for the condition\nsecurity-context-test-7109           13s         Normal    Pulled                       pod/implicit-root-uid                                                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nsecurity-context-test-7109           13s         Warning   Failed                       pod/implicit-root-uid                                                       Error: container has runAsNonRoot and image will run as root\nsecurity-context-test-8368           44s         Normal    Scheduled                    pod/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10                   Successfully assigned security-context-test-8368/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10 to bootstrap-e2e-minion-group-7htw\nsecurity-context-test-8368           39s         Normal    Pulling                      pod/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10                   Pulling image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-8368           36s         Normal    Pulled                       pod/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10                   Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-8368           36s         Normal    Created                      pod/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10                   Created container alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10\nsecurity-context-test-8368           33s    
     Normal    Started                      pod/alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10                   Started container alpine-nnp-false-4756ecc9-5d21-46eb-913a-65cd6ce6be10\nservices-1811                        3m22s       Normal    Scheduled                    pod/execpod-cjkdx                                                           Successfully assigned services-1811/execpod-cjkdx to bootstrap-e2e-minion-group-dwjn\nservices-1811                        3m20s       Normal    Pulled                       pod/execpod-cjkdx                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m20s       Normal    Created                      pod/execpod-cjkdx                                                           Created container agnhost-pause\nservices-1811                        3m20s       Normal    Started                      pod/execpod-cjkdx                                                           Started container agnhost-pause\nservices-1811                        3m10s       Normal    Killing                      pod/execpod-cjkdx                                                           Stopping container agnhost-pause\nservices-1811                        2m57s       Normal    Scheduled                    pod/execpod-fcvsz                                                           Successfully assigned services-1811/execpod-fcvsz to bootstrap-e2e-minion-group-7htw\nservices-1811                        2m50s       Normal    Pulled                       pod/execpod-fcvsz                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        2m50s       Normal    Created                      pod/execpod-fcvsz                                                           Created container 
agnhost-pause\nservices-1811                        2m48s       Normal    Started                      pod/execpod-fcvsz                                                           Started container agnhost-pause\nservices-1811                        2m31s       Normal    Killing                      pod/execpod-fcvsz                                                           Stopping container agnhost-pause\nservices-1811                        3m57s       Normal    Scheduled                    pod/service-headless-9h5hk                                                  Successfully assigned services-1811/service-headless-9h5hk to bootstrap-e2e-minion-group-1s6w\nservices-1811                        3m51s       Normal    Pulled                       pod/service-headless-9h5hk                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m51s       Normal    Created                      pod/service-headless-9h5hk                                                  Created container service-headless\nservices-1811                        3m49s       Normal    Started                      pod/service-headless-9h5hk                                                  Started container service-headless\nservices-1811                        3m57s       Normal    Scheduled                    pod/service-headless-pxh44                                                  Successfully assigned services-1811/service-headless-pxh44 to bootstrap-e2e-minion-group-dwjn\nservices-1811                        3m55s       Normal    Pulled                       pod/service-headless-pxh44                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m55s       Normal    Created                      pod/service-headless-pxh44                                          
        Created container service-headless\nservices-1811                        3m55s       Normal    Started                      pod/service-headless-pxh44                                                  Started container service-headless\nservices-1811                        3m58s       Normal    Scheduled                    pod/service-headless-q674q                                                  Successfully assigned services-1811/service-headless-q674q to bootstrap-e2e-minion-group-5wn8\nservices-1811                        3m57s       Normal    Pulled                       pod/service-headless-q674q                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m57s       Normal    Created                      pod/service-headless-q674q                                                  Created container service-headless\nservices-1811                        3m56s       Normal    Started                      pod/service-headless-q674q                                                  Started container service-headless\nservices-1811                        3m35s       Normal    Scheduled                    pod/service-headless-toggled-d6qx6                                          Successfully assigned services-1811/service-headless-toggled-d6qx6 to bootstrap-e2e-minion-group-dwjn\nservices-1811                        3m34s       Normal    Pulled                       pod/service-headless-toggled-d6qx6                                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m34s       Normal    Created                      pod/service-headless-toggled-d6qx6                                          Created container service-headless-toggled\nservices-1811                        3m34s       Normal    Started                      
pod/service-headless-toggled-d6qx6                                          Started container service-headless-toggled\nservices-1811                        3m35s       Normal    Scheduled                    pod/service-headless-toggled-dzs5q                                          Successfully assigned services-1811/service-headless-toggled-dzs5q to bootstrap-e2e-minion-group-1s6w\nservices-1811                        3m31s       Normal    Pulled                       pod/service-headless-toggled-dzs5q                                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m31s       Normal    Created                      pod/service-headless-toggled-dzs5q                                          Created container service-headless-toggled\nservices-1811                        3m29s       Normal    Started                      pod/service-headless-toggled-dzs5q                                          Started container service-headless-toggled\nservices-1811                        3m35s       Normal    Scheduled                    pod/service-headless-toggled-qlk4b                                          Successfully assigned services-1811/service-headless-toggled-qlk4b to bootstrap-e2e-minion-group-7htw\nservices-1811                        3m29s       Normal    Pulled                       pod/service-headless-toggled-qlk4b                                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-1811                        3m29s       Normal    Created                      pod/service-headless-toggled-qlk4b                                          Created container service-headless-toggled\nservices-1811                        3m27s       Normal    Started                      pod/service-headless-toggled-qlk4b                                          Started container 
service-headless-toggled\nservices-1811                        3m36s       Normal    SuccessfulCreate             replicationcontroller/service-headless-toggled                              Created pod: service-headless-toggled-d6qx6\nservices-1811                        3m35s       Normal    SuccessfulCreate             replicationcontroller/service-headless-toggled                              Created pod: service-headless-toggled-qlk4b\nservices-1811                        3m35s       Normal    SuccessfulCreate             replicationcontroller/service-headless-toggled                              Created pod: service-headless-toggled-dzs5q\nservices-1811                        3m58s       Normal    SuccessfulCreate             replicationcontroller/service-headless                                      Created pod: service-headless-q674q\nservices-1811                        3m58s       Normal    SuccessfulCreate             replicationcontroller/service-headless                                      Created pod: service-headless-pxh44\nservices-1811                        3m58s       Normal    SuccessfulCreate             replicationcontroller/service-headless                                      Created pod: service-headless-9h5hk\nservices-2632                        4m17s       Normal    Scheduled                    pod/execpod-zcv27                                                           Successfully assigned services-2632/execpod-zcv27 to bootstrap-e2e-minion-group-1s6w\nservices-2632                        4m15s       Normal    Pulled                       pod/execpod-zcv27                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2632                        4m14s       Normal    Created                      pod/execpod-zcv27                                                           Created container agnhost-pause\nservices-2632                        
4m14s       Normal    Started                      pod/execpod-zcv27                                                           Started container agnhost-pause\nservices-2632                        4m36s       Normal    Scheduled                    pod/slow-terminating-unready-pod-q545q                                      Successfully assigned services-2632/slow-terminating-unready-pod-q545q to bootstrap-e2e-minion-group-1s6w\nservices-2632                        4m33s       Normal    Pulled                       pod/slow-terminating-unready-pod-q545q                                      Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2632                        4m33s       Normal    Created                      pod/slow-terminating-unready-pod-q545q                                      Created container slow-terminating-unready-pod\nservices-2632                        4m32s       Normal    Started                      pod/slow-terminating-unready-pod-q545q                                      Started container slow-terminating-unready-pod\nservices-2632                        3m21s       Warning   Unhealthy                    pod/slow-terminating-unready-pod-q545q                                      Readiness probe failed:\nservices-2632                        3m17s       Normal    Killing                      pod/slow-terminating-unready-pod-q545q                                      Stopping container slow-terminating-unready-pod\nservices-2632                        4m36s       Normal    SuccessfulCreate             replicationcontroller/slow-terminating-unready-pod                          Created pod: slow-terminating-unready-pod-q545q\nservices-2632                        3m47s       Normal    SuccessfulDelete             replicationcontroller/slow-terminating-unready-pod                          Deleted pod: slow-terminating-unready-pod-q545q\nservices-6510                        3m5s        Normal    
Scheduled                    pod/hairpin                                                                 Successfully assigned services-6510/hairpin to bootstrap-e2e-minion-group-5wn8\nservices-6510                        3m3s        Warning   FailedMount                  pod/hairpin                                                                 MountVolume.SetUp failed for volume \"default-token-7l9jp\" : failed to sync secret cache: timed out waiting for the condition\nservices-6510                        3m          Normal    Pulled                       pod/hairpin                                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-6510                        3m          Normal    Created                      pod/hairpin                                                                 Created container agnhost\nservices-6510                        2m57s       Normal    Started                      pod/hairpin                                                                 Started container agnhost\nservices-7709                        2m43s       Normal    Scheduled                    pod/execpod-6bh67                                                           Successfully assigned services-7709/execpod-6bh67 to bootstrap-e2e-minion-group-7htw\nservices-7709                        2m38s       Normal    Pulled                       pod/execpod-6bh67                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        2m38s       Normal    Created                      pod/execpod-6bh67                                                           Created container agnhost-pause\nservices-7709                        2m35s       Normal    Started                      pod/execpod-6bh67                                                           Started 
container agnhost-pause\nservices-7709                        2m21s       Normal    Killing                      pod/execpod-6bh67                                                           Stopping container agnhost-pause\nservices-7709                        4m29s       Normal    Scheduled                    pod/execpod-gpj79                                                           Successfully assigned services-7709/execpod-gpj79 to bootstrap-e2e-minion-group-1s6w\nservices-7709                        4m27s       Normal    Pulled                       pod/execpod-gpj79                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        4m27s       Normal    Created                      pod/execpod-gpj79                                                           Created container agnhost-pause\nservices-7709                        4m26s       Normal    Started                      pod/execpod-gpj79                                                           Started container agnhost-pause\nservices-7709                        4m10s       Normal    Killing                      pod/execpod-gpj79                                                           Stopping container agnhost-pause\nservices-7709                        3m43s       Normal    Scheduled                    pod/execpod-gsl9m                                                           Successfully assigned services-7709/execpod-gsl9m to bootstrap-e2e-minion-group-1s6w\nservices-7709                        3m38s       Normal    Pulled                       pod/execpod-gsl9m                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        3m38s       Normal    Created                      pod/execpod-gsl9m                                                           
Created container agnhost-pause\nservices-7709                        3m36s       Normal    Started                      pod/execpod-gsl9m                                                           Started container agnhost-pause\nservices-7709                        3m19s       Normal    Killing                      pod/execpod-gsl9m                                                           Stopping container agnhost-pause\nservices-7709                        3m3s        Normal    Scheduled                    pod/execpod-mc422                                                           Successfully assigned services-7709/execpod-mc422 to bootstrap-e2e-minion-group-5wn8\nservices-7709                        3m          Normal    Pulled                       pod/execpod-mc422                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        3m          Normal    Created                      pod/execpod-mc422                                                           Created container agnhost-pause\nservices-7709                        2m57s       Normal    Started                      pod/execpod-mc422                                                           Started container agnhost-pause\nservices-7709                        2m43s       Normal    Killing                      pod/execpod-mc422                                                           Stopping container agnhost-pause\nservices-7709                        4m50s       Normal    Scheduled                    pod/execpod-mmglw                                                           Successfully assigned services-7709/execpod-mmglw to bootstrap-e2e-minion-group-dwjn\nservices-7709                        4m47s       Normal    Pulled                       pod/execpod-mmglw                                                           Container image 
\"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        4m47s       Normal    Created                      pod/execpod-mmglw                                                           Created container agnhost-pause\nservices-7709                        4m46s       Normal    Started                      pod/execpod-mmglw                                                           Started container agnhost-pause\nservices-7709                        4m29s       Normal    Killing                      pod/execpod-mmglw                                                           Stopping container agnhost-pause\nservices-7709                        5m39s       Normal    Scheduled                    pod/up-down-1-9587j                                                         Successfully assigned services-7709/up-down-1-9587j to bootstrap-e2e-minion-group-5wn8\nservices-7709                        5m36s       Normal    Pulled                       pod/up-down-1-9587j                                                         Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        5m36s       Normal    Created                      pod/up-down-1-9587j                                                         Created container up-down-1\nservices-7709                        5m36s       Normal    Started                      pod/up-down-1-9587j                                                         Started container up-down-1\nservices-7709                        3m57s       Normal    Killing                      pod/up-down-1-9587j                                                         Stopping container up-down-1\nservices-7709                        5m39s       Normal    Scheduled                    pod/up-down-1-k8kk6                                                         Successfully assigned services-7709/up-down-1-k8kk6 to 
bootstrap-e2e-minion-group-1s6w\nservices-7709                        5m36s       Normal    Pulled                       pod/up-down-1-k8kk6                                                         Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        5m35s       Normal    Created                      pod/up-down-1-k8kk6                                                         Created container up-down-1\nservices-7709                        5m34s       Normal    Started                      pod/up-down-1-k8kk6                                                         Started container up-down-1\nservices-7709                        3m57s       Normal    Killing                      pod/up-down-1-k8kk6                                                         Stopping container up-down-1\nservices-7709                        5m39s       Normal    Scheduled                    pod/up-down-1-r4lhl                                                         Successfully assigned services-7709/up-down-1-r4lhl to bootstrap-e2e-minion-group-7htw\nservices-7709                        5m24s       Normal    Pulled                       pod/up-down-1-r4lhl                                                         Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-7709                        5m24s       Normal    Created                      pod/up-down-1-r4lhl                                                         Created container up-down-1\nservices-7709                        5m19s       Normal    Started                      pod/up-down-1-r4lhl                                                         Started container up-down-1\nservices-7709                        3m57s       Normal    Killing                      pod/up-down-1-r4lhl                                                         Stopping container up-down-1\nservices-7709                        
5m39s  Normal   SuccessfulCreate        replicationcontroller/up-down-1     Created pod: up-down-1-k8kk6
services-7709     5m39s  Normal   SuccessfulCreate        replicationcontroller/up-down-1     Created pod: up-down-1-r4lhl
services-7709     5m39s  Normal   SuccessfulCreate        replicationcontroller/up-down-1     Created pod: up-down-1-9587j
services-7709     3m57s  Warning  FailedToUpdateEndpoint  endpoints/up-down-1                 Failed to update endpoint services-7709/up-down-1: Operation cannot be fulfilled on endpoints "up-down-1": the object has been modified; please apply your changes to the latest version and try again
services-7709     5m14s  Normal   Scheduled               pod/up-down-2-96dt7                 Successfully assigned services-7709/up-down-2-96dt7 to bootstrap-e2e-minion-group-dwjn
services-7709     5m11s  Normal   Pulled                  pod/up-down-2-96dt7                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     5m11s  Normal   Created                 pod/up-down-2-96dt7                 Created container up-down-2
services-7709     5m10s  Normal   Started                 pod/up-down-2-96dt7                 Started container up-down-2
services-7709     5m13s  Normal   Scheduled               pod/up-down-2-k8rql                 Successfully assigned services-7709/up-down-2-k8rql to bootstrap-e2e-minion-group-7htw
services-7709     5m4s   Normal   Pulled                  pod/up-down-2-k8rql                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     5m3s   Normal   Created                 pod/up-down-2-k8rql                 Created container up-down-2
services-7709     5m     Normal   Started                 pod/up-down-2-k8rql                 Started container up-down-2
services-7709     5m13s  Normal   Scheduled               pod/up-down-2-vxsqr                 Successfully assigned services-7709/up-down-2-vxsqr to bootstrap-e2e-minion-group-5wn8
services-7709     5m12s  Normal   Pulled                  pod/up-down-2-vxsqr                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     5m11s  Normal   Created                 pod/up-down-2-vxsqr                 Created container up-down-2
services-7709     5m11s  Normal   Started                 pod/up-down-2-vxsqr                 Started container up-down-2
services-7709     5m14s  Normal   SuccessfulCreate        replicationcontroller/up-down-2     Created pod: up-down-2-96dt7
services-7709     5m14s  Normal   SuccessfulCreate        replicationcontroller/up-down-2     Created pod: up-down-2-k8rql
services-7709     5m14s  Normal   SuccessfulCreate        replicationcontroller/up-down-2     Created pod: up-down-2-vxsqr
services-7709     5m9s   Warning  FailedToUpdateEndpoint  endpoints/up-down-2                 Failed to update endpoint services-7709/up-down-2: Operation cannot be fulfilled on endpoints "up-down-2": the object has been modified; please apply your changes to the latest version and try again
services-7709     3m19s  Normal   Scheduled               pod/up-down-3-2xfxs                 Successfully assigned services-7709/up-down-3-2xfxs to bootstrap-e2e-minion-group-7htw
services-7709     3m14s  Normal   Pulled                  pod/up-down-3-2xfxs                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     3m14s  Normal   Created                 pod/up-down-3-2xfxs                 Created container up-down-3
services-7709     3m11s  Normal   Started                 pod/up-down-3-2xfxs                 Started container up-down-3
services-7709     3m19s  Normal   Scheduled               pod/up-down-3-tjbs6                 Successfully assigned services-7709/up-down-3-tjbs6 to bootstrap-e2e-minion-group-1s6w
services-7709     3m12s  Normal   Pulled                  pod/up-down-3-tjbs6                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     3m12s  Normal   Created                 pod/up-down-3-tjbs6                 Created container up-down-3
services-7709     3m11s  Normal   Started                 pod/up-down-3-tjbs6                 Started container up-down-3
services-7709     3m18s  Normal   Scheduled               pod/up-down-3-w76bx                 Successfully assigned services-7709/up-down-3-w76bx to bootstrap-e2e-minion-group-5wn8
services-7709     3m16s  Normal   Pulled                  pod/up-down-3-w76bx                 Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-7709     3m16s  Normal   Created                 pod/up-down-3-w76bx                 Created container up-down-3
services-7709     3m15s  Normal   Started                 pod/up-down-3-w76bx                 Started container up-down-3
services-7709     3m19s  Normal   SuccessfulCreate        replicationcontroller/up-down-3     Created pod: up-down-3-2xfxs
services-7709     3m19s  Normal   SuccessfulCreate        replicationcontroller/up-down-3     Created pod: up-down-3-tjbs6
services-7709     3m19s  Normal   SuccessfulCreate        replicationcontroller/up-down-3     Created pod: up-down-3-w76bx
statefulset-289   4m51s  Normal   Scheduled               pod/ss2-0                           Successfully assigned statefulset-289/ss2-0 to bootstrap-e2e-minion-group-5wn8
statefulset-289   4m46s  Normal   Pulled                  pod/ss2-0                           Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-289   4m46s  Normal   Created                 pod/ss2-0                           Created container webserver
statefulset-289   4m46s  Normal   Started                 pod/ss2-0                           Started container webserver
statefulset-289   3m46s  Normal   Killing                 pod/ss2-0                           Stopping container webserver
statefulset-289   3m45s  Normal   Scheduled               pod/ss2-0                           Successfully assigned statefulset-289/ss2-0 to bootstrap-e2e-minion-group-7htw
statefulset-289   3m41s  Normal   Pulled                  pod/ss2-0                           Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-289   3m40s  Normal   Created                 pod/ss2-0                           Created container webserver
statefulset-289   3m40s  Normal   Started                 pod/ss2-0                           Started container webserver
statefulset-289   2m59s  Normal   Killing                 pod/ss2-0                           Stopping container webserver
statefulset-289   2m57s  Warning  Unhealthy               pod/ss2-0                           Readiness probe failed: Get http://10.64.1.247:80/index.html: dial tcp 10.64.1.247:80: connect: connection refused
statefulset-289   2m43s  Normal   Scheduled               pod/ss2-0                           Successfully assigned statefulset-289/ss2-0 to bootstrap-e2e-minion-group-7htw
statefulset-289   2m39s  Normal   Pulled                  pod/ss2-0                           Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
statefulset-289   2m39s  Normal   Created                 pod/ss2-0                           Created container webserver
statefulset-289   2m37s  Normal   Started                 pod/ss2-0                           Started container webserver
statefulset-289   108s   Normal   Killing                 pod/ss2-0                           Stopping container webserver
statefulset-289   106s   Warning  Unhealthy               pod/ss2-0                           Readiness probe failed: Get http://10.64.1.10:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
statefulset-289   106s   Warning  Unhealthy               pod/ss2-0                           Readiness probe failed: Get http://10.64.1.10:80/index.html: read tcp 10.64.1.1:33066->10.64.1.10:80: read: connection reset by peer
statefulset-289   105s   Warning  Unhealthy               pod/ss2-0                           Readiness probe failed: Get http://10.64.1.10:80/index.html: dial tcp 10.64.1.10:80: connect: connection refused
statefulset-289   4m43s  Normal   Scheduled               pod/ss2-1                           Successfully assigned statefulset-289/ss2-1 to bootstrap-e2e-minion-group-dwjn
statefulset-289   4m41s  Warning  FailedMount             pod/ss2-1                           MountVolume.SetUp failed for volume "default-token-wt9g7" : failed to sync secret cache: timed out waiting for the condition
statefulset-289   4m40s  Normal   Pulled                  pod/ss2-1                           Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-289   4m40s  Normal   Created                 pod/ss2-1                           Created container webserver
statefulset-289   4m40s  Normal   Started                 pod/ss2-1                           Started container webserver
statefulset-289   3m12s  Normal   Killing                 pod/ss2-1                           Stopping container webserver
statefulset-289   3m6s   Normal   Scheduled               pod/ss2-1                           Successfully assigned statefulset-289/ss2-1 to bootstrap-e2e-minion-group-dwjn
statefulset-289   3m4s   Normal   Pulled                  pod/ss2-1                           Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
statefulset-289   3m3s   Normal   Created                 pod/ss2-1                           Created container webserver
statefulset-289   3m2s   Normal   Started                 pod/ss2-1                           Started container webserver
statefulset-289   113s   Normal   Killing                 pod/ss2-1                           Stopping container webserver
statefulset-289   4m34s  Normal   Scheduled               pod/ss2-2                           Successfully assigned statefulset-289/ss2-2 to bootstrap-e2e-minion-group-1s6w
statefulset-289   4m32s  Normal   Pulled                  pod/ss2-2                           Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-289   4m32s  Normal   Created                 pod/ss2-2                           Created container webserver
statefulset-289   4m31s  Normal   Started                 pod/ss2-2                           Started container webserver
statefulset-289   4m8s   Normal   Killing                 pod/ss2-2                           Stopping container webserver
statefulset-289   4m7s   Warning  Unhealthy               pod/ss2-2                           Readiness probe failed: Get http://10.64.3.208:80/index.html: dial tcp 10.64.3.208:80: connect: connection refused
statefulset-289   3m57s  Normal   Scheduled               pod/ss2-2                           Successfully assigned statefulset-289/ss2-2 to bootstrap-e2e-minion-group-1s6w
statefulset-289   3m52s  Normal   Pulled                  pod/ss2-2                           Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
statefulset-289   3m52s  Normal   Created                 pod/ss2-2                           Created container webserver
statefulset-289   3m50s  Normal   Started                 pod/ss2-2                           Started container webserver
statefulset-289   3m45s  Normal   Killing                 pod/ss2-2                           Stopping container webserver
statefulset-289   3m36s  Normal   Scheduled               pod/ss2-2                           Successfully assigned statefulset-289/ss2-2 to bootstrap-e2e-minion-group-5wn8
statefulset-289   3m34s  Normal   Pulling                 pod/ss2-2                           Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-289   3m29s  Normal   Pulled                  pod/ss2-2                           Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-289   3m28s  Normal   Created                 pod/ss2-2                           Created container webserver
statefulset-289   3m27s  Normal   Started                 pod/ss2-2                           Started container webserver
statefulset-289   2m8s   Normal   Killing                 pod/ss2-2                           Stopping container webserver
statefulset-289   2m7s   Warning  Unhealthy               pod/ss2-2                           Readiness probe failed: Get http://10.64.4.219:80/index.html: dial tcp 10.64.4.219:80: connect: connection refused
statefulset-289   2m44s  Normal   SuccessfulCreate        statefulset/ss2                     create Pod ss2-0 in StatefulSet ss2 successful
statefulset-289   3m6s   Normal   SuccessfulCreate        statefulset/ss2                     create Pod ss2-1 in StatefulSet ss2 successful
statefulset-289   3m36s  Normal   SuccessfulCreate        statefulset/ss2                     create Pod ss2-2 in StatefulSet ss2 successful
statefulset-289   2m8s   Normal   SuccessfulDelete        statefulset/ss2                     delete Pod ss2-2 in StatefulSet ss2 successful
statefulset-289   113s   Normal   SuccessfulDelete        statefulset/ss2                     delete Pod ss2-1 in StatefulSet ss2 successful
statefulset-289   108s   Normal   SuccessfulDelete        statefulset/ss2                     delete Pod ss2-0 in StatefulSet ss2 successful
statefulset-289   4m42s  Warning  FailedToUpdateEndpoint  endpoints/test                      Failed to update endpoint statefulset-289/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
statefulset-4270  9m43s  Normal   ProvisioningSucceeded   persistentvolumeclaim/datadir-ss-0  Successfully provisioned volume pvc-94faaa74-facf-4a2e-be7b-43fe395aab19 using kubernetes.io/gce-pd
statefulset-4270  9m22s  Normal   ProvisioningSucceeded   persistentvolumeclaim/datadir-ss-1  Successfully provisioned volume pvc-0eddf19c-455a-4e7a-9ae4-16230b272d0a using kubernetes.io/gce-pd
statefulset-4270  8m51s  Normal   ProvisioningSucceeded   persistentvolumeclaim/datadir-ss-2  Successfully provisioned volume pvc-dfc9337e-5d47-4a7e-9569-49f174d08c45 using kubernetes.io/gce-pd
statefulset-4270  9m43s  Warning  FailedScheduling        pod/ss-0                            running "VolumeBinding" filter plugin for pod "ss-0": pod has unbound immediate PersistentVolumeClaims
statefulset-4270  9m40s  Normal   Scheduled               pod/ss-0                            Successfully assigned statefulset-4270/ss-0 to bootstrap-e2e-minion-group-7htw
statefulset-4270  9m32s  Normal   SuccessfulAttachVolume  pod/ss-0                            AttachVolume.Attach succeeded for volume "pvc-94faaa74-facf-4a2e-be7b-43fe395aab19"
statefulset-4270  9m28s  Normal   Pulled                  pod/ss-0                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  9m28s  Normal   Created                 pod/ss-0                            Created container webserver
statefulset-4270  9m28s  Normal   Started                 pod/ss-0                            Started container webserver
statefulset-4270  5m57s  Normal   Killing                 pod/ss-0                            Stopping container webserver
statefulset-4270  5m44s  Normal   Scheduled               pod/ss-0                            Successfully assigned statefulset-4270/ss-0 to bootstrap-e2e-minion-group-7htw
statefulset-4270  5m15s  Normal   Pulling                 pod/ss-0                            Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  4m50s  Normal   Pulled                  pod/ss-0                            Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  4m50s  Normal   Created                 pod/ss-0                            Created container webserver
statefulset-4270  4m50s  Normal   Started                 pod/ss-0                            Started container webserver
statefulset-4270  2m54s  Normal   Killing                 pod/ss-0                            Stopping container webserver
statefulset-4270  2m50s  Warning  Unhealthy               pod/ss-0                            Readiness probe failed: Get http://10.64.1.231:80/index.html: dial tcp 10.64.1.231:80: connect: connection refused
statefulset-4270  2m34s  Normal   Scheduled               pod/ss-0                            Successfully assigned statefulset-4270/ss-0 to bootstrap-e2e-minion-group-dwjn
statefulset-4270  2m28s  Normal   SuccessfulAttachVolume  pod/ss-0                            AttachVolume.Attach succeeded for volume "pvc-94faaa74-facf-4a2e-be7b-43fe395aab19"
statefulset-4270  2m23s  Normal   Pulled                  pod/ss-0                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  2m23s  Normal   Created                 pod/ss-0                            Created container webserver
statefulset-4270  2m23s  Normal   Started                 pod/ss-0                            Started container webserver
statefulset-4270  110s   Normal   Killing                 pod/ss-0                            Stopping container webserver
statefulset-4270  9m23s  Warning  FailedScheduling        pod/ss-1                            running "VolumeBinding" filter plugin for pod "ss-1": pod has unbound immediate PersistentVolumeClaims
statefulset-4270  9m19s  Normal   Scheduled               pod/ss-1                            Successfully assigned statefulset-4270/ss-1 to bootstrap-e2e-minion-group-1s6w
statefulset-4270  9m19s  Warning  FailedMount             pod/ss-1                            Unable to attach or mount volumes: unmounted volumes=[home default-token-jchlg datadir], unattached volumes=[home default-token-jchlg datadir]: error processing PVC statefulset-4270/datadir-ss-1: failed to fetch PVC from API server: persistentvolumeclaims "datadir-ss-1" is forbidden: User "system:node:bootstrap-e2e-minion-group-1s6w" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "statefulset-4270": no relationship found between node "bootstrap-e2e-minion-group-1s6w" and this object
statefulset-4270  9m13s  Normal   SuccessfulAttachVolume  pod/ss-1                            AttachVolume.Attach succeeded for volume "pvc-0eddf19c-455a-4e7a-9ae4-16230b272d0a"
statefulset-4270  8m58s  Normal   Pulled                  pod/ss-1                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  8m58s  Normal   Created                 pod/ss-1                            Created container webserver
statefulset-4270  8m56s  Normal   Started                 pod/ss-1                            Started container webserver
statefulset-4270  7m56s  Warning  Unhealthy               pod/ss-1                            Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-4270  7m13s  Normal   Killing                 pod/ss-1                            Stopping container webserver
statefulset-4270  7m13s  Warning  Unhealthy               pod/ss-1                            Readiness probe failed: Get http://10.64.3.132:80/index.html: dial tcp 10.64.3.132:80: connect: connection refused
statefulset-4270  6m55s  Normal   Scheduled               pod/ss-1                            Successfully assigned statefulset-4270/ss-1 to bootstrap-e2e-minion-group-1s6w
statefulset-4270  6m55s  Warning  FailedMount             pod/ss-1                            Unable to attach or mount volumes: unmounted volumes=[default-token-jchlg datadir home], unattached volumes=[default-token-jchlg datadir home]: error processing PVC statefulset-4270/datadir-ss-1: failed to fetch PVC from API server: persistentvolumeclaims "datadir-ss-1" is forbidden: User "system:node:bootstrap-e2e-minion-group-1s6w" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "statefulset-4270": no relationship found between node "bootstrap-e2e-minion-group-1s6w" and this object
statefulset-4270  6m49s  Normal   SuccessfulAttachVolume  pod/ss-1                            AttachVolume.Attach succeeded for volume "pvc-0eddf19c-455a-4e7a-9ae4-16230b272d0a"
statefulset-4270  6m39s  Normal   Pulling                 pod/ss-1                            Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  6m3s   Normal   Pulled                  pod/ss-1                            Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  6m3s   Normal   Created                 pod/ss-1                            Created container webserver
statefulset-4270  6m     Normal   Started                 pod/ss-1                            Started container webserver
statefulset-4270  4m19s  Warning  Unhealthy               pod/ss-1                            Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-4270  3m25s  Normal   Scheduled               pod/ss-1                            Successfully assigned statefulset-4270/ss-1 to bootstrap-e2e-minion-group-1s6w
statefulset-4270  3m24s  Warning  FailedMount             pod/ss-1                            Unable to attach or mount volumes: unmounted volumes=[datadir default-token-jchlg], unattached volumes=[datadir home default-token-jchlg]: error processing PVC statefulset-4270/datadir-ss-1: failed to fetch PVC from API server: persistentvolumeclaims "datadir-ss-1" is forbidden: User "system:node:bootstrap-e2e-minion-group-1s6w" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "statefulset-4270": no relationship found between node "bootstrap-e2e-minion-group-1s6w" and this object
statefulset-4270  3m14s  Normal   SuccessfulAttachVolume  pod/ss-1                            AttachVolume.Attach succeeded for volume "pvc-0eddf19c-455a-4e7a-9ae4-16230b272d0a"
statefulset-4270  3m3s   Normal   Pulled                  pod/ss-1                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  3m3s   Normal   Created                 pod/ss-1                            Created container webserver
statefulset-4270  3m     Normal   Started                 pod/ss-1                            Started container webserver
statefulset-4270  2m2s   Normal   Killing                 pod/ss-1                            Stopping container webserver
statefulset-4270  2m1s   Warning  Unhealthy               pod/ss-1                            Readiness probe failed: Get http://10.64.3.233:80/index.html: dial tcp 10.64.3.233:80: connect: connection refused
statefulset-4270  8m53s  Warning  FailedScheduling        pod/ss-2                            running "VolumeBinding" filter plugin for pod "ss-2": pod has unbound immediate PersistentVolumeClaims
statefulset-4270  8m49s  Normal   Scheduled               pod/ss-2                            Successfully assigned statefulset-4270/ss-2 to bootstrap-e2e-minion-group-dwjn
statefulset-4270  8m49s  Warning  FailedMount             pod/ss-2                            Unable to attach or mount volumes: unmounted volumes=[datadir home default-token-jchlg], unattached volumes=[datadir home default-token-jchlg]: error processing PVC statefulset-4270/datadir-ss-2: failed to fetch PVC from API server: persistentvolumeclaims "datadir-ss-2" is forbidden: User "system:node:bootstrap-e2e-minion-group-dwjn" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "statefulset-4270": no relationship found between node "bootstrap-e2e-minion-group-dwjn" and this object
statefulset-4270  8m43s  Normal   SuccessfulAttachVolume  pod/ss-2                            AttachVolume.Attach succeeded for volume "pvc-dfc9337e-5d47-4a7e-9569-49f174d08c45"
statefulset-4270  8m31s  Normal   Pulled                  pod/ss-2                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  8m31s  Normal   Created                 pod/ss-2                            Created container webserver
statefulset-4270  8m30s  Normal   Started                 pod/ss-2                            Started container webserver
statefulset-4270  7m52s  Normal   Killing                 pod/ss-2                            Stopping container webserver
statefulset-4270  7m52s  Warning  Unhealthy               pod/ss-2                            Readiness probe failed: Get http://10.64.2.162:80/index.html: dial tcp 10.64.2.162:80: connect: connection refused
statefulset-4270  7m49s  Normal   Scheduled               pod/ss-2                            Successfully assigned statefulset-4270/ss-2 to bootstrap-e2e-minion-group-dwjn
statefulset-4270  7m38s  Normal   Pulling                 pod/ss-2                            Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  7m18s  Normal   Pulled                  pod/ss-2                            Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4270  7m18s  Normal   Created                 pod/ss-2                            Created container webserver
statefulset-4270  7m17s  Normal   Started                 pod/ss-2                            Started container webserver
statefulset-4270  4m6s   Normal   Killing                 pod/ss-2                            Stopping container webserver
statefulset-4270  4m1s   Normal   Scheduled               pod/ss-2                            Successfully assigned statefulset-4270/ss-2 to bootstrap-e2e-minion-group-5wn8
statefulset-4270  3m49s  Normal   SuccessfulAttachVolume  pod/ss-2                            AttachVolume.Attach succeeded for volume "pvc-dfc9337e-5d47-4a7e-9569-49f174d08c45"
statefulset-4270  3m41s  Normal   Pulled                  pod/ss-2                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4270  3m41s  Normal   Created                 pod/ss-2                            Created container webserver
statefulset-4270  3m41s  Normal   Started                 pod/ss-2                            Started container webserver
statefulset-4270  2m8s   Normal   Killing                 pod/ss-2                            Stopping container webserver
statefulset-4270  2m7s   Warning  Unhealthy               pod/ss-2                            Readiness probe failed: Get http://10.64.4.218:80/index.html: dial tcp 10.64.4.218:80: connect: connection refused
statefulset-4270  9m46s  Normal   SuccessfulCreate        statefulset/ss                      create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
statefulset-4270  2m34s  Normal   SuccessfulCreate        statefulset/ss                      create Pod ss-0 in StatefulSet ss successful
statefulset-4270  9m25s  Normal   SuccessfulCreate        statefulset/ss                      create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success
statefulset-4270  3m25s  Normal   SuccessfulCreate        statefulset/ss                      create Pod ss-1 in StatefulSet ss successful
statefulset-4270  8m54s  Normal   SuccessfulCreate        statefulset/ss                      create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success
statefulset-4270  4m1s   Normal   SuccessfulCreate        statefulset/ss                      create Pod ss-2 in StatefulSet ss successful
statefulset-4270  2m8s   Normal   SuccessfulDelete        statefulset/ss                      delete Pod ss-2 in StatefulSet ss successful
statefulset-4270  2m3s   Normal   SuccessfulDelete        statefulset/ss                      delete Pod ss-1 in StatefulSet ss successful
statefulset-4270  109s   Normal   SuccessfulDelete        statefulset/ss                      delete Pod ss-0 in StatefulSet ss successful
statefulset-6251  102s   Warning  NodePorts               pod/ss-0                            Predicate NodePorts failed
statefulset-6251  92s    Warning  NodePorts               pod/ss-0                            Predicate NodePorts failed
statefulset-6251  78s    Warning  NodePorts               pod/ss-0                            Predicate NodePorts failed
statefulset-6251  69s    Normal   Pulled                  pod/ss-0                            Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-6251  69s    Normal   Created                 pod/ss-0                            Created container webserver
statefulset-6251  68s    Normal   Started                 pod/ss-0                            Started container webserver
statefulset-6251  49s    Normal   Killing                 pod/ss-0                            Stopping container webserver
statefulset-6251  70s    Normal   SuccessfulCreate        statefulset/ss                      
  create Pod ss-0 in StatefulSet ss successful\nstatefulset-6251                     72s         Warning   RecreatingFailedPod          statefulset/ss                                                              StatefulSet statefulset-6251/ss is recreating failed Pod ss-0\nstatefulset-6251                     49s         Normal    SuccessfulDelete             statefulset/ss                                                              delete Pod ss-0 in StatefulSet ss successful\nstatefulset-6251                     70s         Warning   FailedCreate                 statefulset/ss                                                              create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\nstatefulset-6251                     98s         Normal    Pulled                       pod/test-pod                                                                Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-6251                     98s         Normal    Created                      pod/test-pod                                                                Created container webserver\nstatefulset-6251                     95s         Normal    Started                      pod/test-pod                                                                Started container webserver\nstatefulset-6251                     78s         Normal    Killing                      pod/test-pod                                                                Stopping container webserver\nstatefulset-7464                     2m18s       Normal    Scheduled                    pod/ss2-0                                                                   Successfully assigned statefulset-7464/ss2-0 to bootstrap-e2e-minion-group-7htw\nstatefulset-7464                     2m14s       Normal    Pulled                       pod/ss2-0                                            
                       Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-7464                     2m14s       Normal    Created                      pod/ss2-0                                                                   Created container webserver\nstatefulset-7464                     2m14s       Normal    Started                      pod/ss2-0                                                                   Started container webserver\nstatefulset-7464                     42s         Normal    Killing                      pod/ss2-0                                                                   Stopping container webserver\nstatefulset-7464                     40s         Warning   Unhealthy                    pod/ss2-0                                                                   Readiness probe failed: Get http://10.64.1.17:80/index.html: dial tcp 10.64.1.17:80: connect: connection refused\nstatefulset-7464                     24s         Normal    Scheduled                    pod/ss2-0                                                                   Successfully assigned statefulset-7464/ss2-0 to bootstrap-e2e-minion-group-7htw\nstatefulset-7464                     17s         Normal    Pulled                       pod/ss2-0                                                                   Container image \"docker.io/library/httpd:2.4.39-alpine\" already present on machine\nstatefulset-7464                     17s         Normal    Created                      pod/ss2-0                                                                   Created container webserver\nstatefulset-7464                     15s         Normal    Started                      pod/ss2-0                                                                   Started container webserver\nstatefulset-7464                     2m6s        Normal    Scheduled                    pod/ss2-1                                                       
            Successfully assigned statefulset-7464/ss2-1 to bootstrap-e2e-minion-group-1s6w\nstatefulset-7464                     2m4s        Normal    Pulled                       pod/ss2-1                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-7464                     2m4s        Normal    Created                      pod/ss2-1                                                                   Created container webserver\nstatefulset-7464                     2m3s        Normal    Started                      pod/ss2-1                                                                   Started container webserver\nstatefulset-7464                     90s         Warning   Unhealthy                    pod/ss2-1                                                                   Readiness probe failed: HTTP probe failed with statuscode: 404\nstatefulset-7464                     68s         Normal    Killing                      pod/ss2-1                                                                   Stopping container webserver\nstatefulset-7464                     66s         Warning   Unhealthy                    pod/ss2-1                                                                   Readiness probe failed: Get http://10.64.3.241:80/index.html: dial tcp 10.64.3.241:80: connect: connection refused\nstatefulset-7464                     53s         Normal    Scheduled                    pod/ss2-1                                                                   Successfully assigned statefulset-7464/ss2-1 to bootstrap-e2e-minion-group-1s6w\nstatefulset-7464                     51s         Warning   FailedMount                  pod/ss2-1                                                                   MountVolume.SetUp failed for volume \"default-token-smmm5\" : failed to sync secret cache: timed out waiting for the condition\nstatefulset-7464                
     49s         Normal    Pulled                       pod/ss2-1                                                                   Container image \"docker.io/library/httpd:2.4.39-alpine\" already present on machine\nstatefulset-7464                     49s         Normal    Created                      pod/ss2-1                                                                   Created container webserver\nstatefulset-7464                     46s         Normal    Started                      pod/ss2-1                                                                   Started container webserver\nstatefulset-7464                     115s        Normal    Scheduled                    pod/ss2-2                                                                   Successfully assigned statefulset-7464/ss2-2 to bootstrap-e2e-minion-group-5wn8\nstatefulset-7464                     113s        Normal    Pulled                       pod/ss2-2                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-7464                     113s        Normal    Created                      pod/ss2-2                                                                   Created container webserver\nstatefulset-7464                     113s        Normal    Started                      pod/ss2-2                                                                   Started container webserver\nstatefulset-7464                     88s         Normal    Killing                      pod/ss2-2                                                                   Stopping container webserver\nstatefulset-7464                     75s         Normal    Scheduled                    pod/ss2-2                                                                   Successfully assigned statefulset-7464/ss2-2 to bootstrap-e2e-minion-group-5wn8\nstatefulset-7464                     74s         Warning   FailedMount      
            pod/ss2-2                                                                   MountVolume.SetUp failed for volume \"default-token-smmm5\" : failed to sync secret cache: timed out waiting for the condition\nstatefulset-7464                     72s         Normal    Pulled                       pod/ss2-2                                                                   Container image \"docker.io/library/httpd:2.4.39-alpine\" already present on machine\nstatefulset-7464                     71s         Normal    Created                      pod/ss2-2                                                                   Created container webserver\nstatefulset-7464                     71s         Normal    Started                      pod/ss2-2                                                                   Started container webserver\nstatefulset-7464                     24s         Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-7464                     53s         Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-7464                     76s         Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-7464                     88s         Normal    SuccessfulDelete             statefulset/ss2                                                             delete Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-7464                     68s         Normal    SuccessfulDelete             statefulset/ss2                                                             delete Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-7464                     42s         Normal    SuccessfulDelete             
statefulset/ss2                                                             delete Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-7464                     2m6s        Warning   FailedToUpdateEndpoint       endpoints/test                                                              Failed to update endpoint statefulset-7464/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again\nstatefulset-8127                     95s         Normal    Scheduled                    pod/ss2-0                                                                   Successfully assigned statefulset-8127/ss2-0 to bootstrap-e2e-minion-group-dwjn\nstatefulset-8127                     93s         Normal    Pulled                       pod/ss2-0                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     93s         Normal    Created                      pod/ss2-0                                                                   Created container webserver\nstatefulset-8127                     92s         Normal    Started                      pod/ss2-0                                                                   Started container webserver\nstatefulset-8127                     52s         Normal    Killing                      pod/ss2-0                                                                   Stopping container webserver\nstatefulset-8127                     53s         Warning   Unhealthy                    pod/ss2-0                                                                   Readiness probe failed: Get http://10.64.2.12:80/index.html: dial tcp 10.64.2.12:80: connect: connection refused\nstatefulset-8127                     51s         Normal    Scheduled                    pod/ss2-0                                                                   
Successfully assigned statefulset-8127/ss2-0 to bootstrap-e2e-minion-group-7htw\nstatefulset-8127                     48s         Normal    Pulled                       pod/ss2-0                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     48s         Normal    Created                      pod/ss2-0                                                                   Created container webserver\nstatefulset-8127                     47s         Normal    Started                      pod/ss2-0                                                                   Started container webserver\nstatefulset-8127                     7s          Normal    Killing                      pod/ss2-0                                                                   Stopping container webserver\nstatefulset-8127                     7s          Warning   Unhealthy                    pod/ss2-0                                                                   Readiness probe failed: Get http://10.64.1.46:80/index.html: dial tcp 10.64.1.46:80: connect: connection refused\nstatefulset-8127                     7s          Normal    Scheduled                    pod/ss2-0                                                                   Successfully assigned statefulset-8127/ss2-0 to bootstrap-e2e-minion-group-dwjn\nstatefulset-8127                     5s          Normal    Pulled                       pod/ss2-0                                                                   Container image \"docker.io/library/httpd:2.4.39-alpine\" already present on machine\nstatefulset-8127                     5s          Normal    Created                      pod/ss2-0                                                                   Created container webserver\nstatefulset-8127                     5s          Normal    Started                      pod/ss2-0                          
                                         Started container webserver\nstatefulset-8127                     90s         Normal    Scheduled                    pod/ss2-1                                                                   Successfully assigned statefulset-8127/ss2-1 to bootstrap-e2e-minion-group-7htw\nstatefulset-8127                     86s         Normal    Pulled                       pod/ss2-1                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     86s         Normal    Created                      pod/ss2-1                                                                   Created container webserver\nstatefulset-8127                     86s         Normal    Started                      pod/ss2-1                                                                   Started container webserver\nstatefulset-8127                     52s         Normal    Killing                      pod/ss2-1                                                                   Stopping container webserver\nstatefulset-8127                     39s         Normal    Scheduled                    pod/ss2-1                                                                   Successfully assigned statefulset-8127/ss2-1 to bootstrap-e2e-minion-group-5wn8\nstatefulset-8127                     39s         Normal    Pulled                       pod/ss2-1                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     38s         Normal    Created                      pod/ss2-1                                                                   Created container webserver\nstatefulset-8127                     38s         Normal    Started                      pod/ss2-1                                                                   
Started container webserver\nstatefulset-8127                     7s          Normal    Killing                      pod/ss2-1                                                                   Stopping container webserver\nstatefulset-8127                     74s         Normal    Scheduled                    pod/ss2-2                                                                   Successfully assigned statefulset-8127/ss2-2 to bootstrap-e2e-minion-group-1s6w\nstatefulset-8127                     68s         Normal    Pulled                       pod/ss2-2                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     68s         Normal    Created                      pod/ss2-2                                                                   Created container webserver\nstatefulset-8127                     66s         Normal    Started                      pod/ss2-2                                                                   Started container webserver\nstatefulset-8127                     51s         Normal    Killing                      pod/ss2-2                                                                   Stopping container webserver\nstatefulset-8127                     36s         Normal    Scheduled                    pod/ss2-2                                                                   Successfully assigned statefulset-8127/ss2-2 to bootstrap-e2e-minion-group-1s6w\nstatefulset-8127                     30s         Normal    Pulled                       pod/ss2-2                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-8127                     30s         Normal    Created                      pod/ss2-2                                                                   Created container 
webserver\nstatefulset-8127                     28s         Normal    Started                      pod/ss2-2                                                                   Started container webserver\nstatefulset-8127                     6s          Normal    Killing                      pod/ss2-2                                                                   Stopping container webserver\nstatefulset-8127                     7s          Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-8127                     40s         Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-8127                     36s         Normal    SuccessfulCreate             statefulset/ss2                                                             create Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-8127                     39s         Warning   FailedToUpdateEndpoint       endpoints/test                                                              Failed to update endpoint statefulset-8127/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again\nsvc-latency-7096                     2m8s        Normal    Scheduled                    pod/svc-latency-rc-7ckm2                                                    Successfully assigned svc-latency-7096/svc-latency-rc-7ckm2 to bootstrap-e2e-minion-group-7htw\nsvc-latency-7096                     2m7s        Normal    Pulled                       pod/svc-latency-rc-7ckm2                                                    Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsvc-latency-7096                     2m6s        Normal    Created                      pod/svc-latency-rc-7ckm2  
                                                  Created container svc-latency-rc\nsvc-latency-7096                     2m4s        Normal    Started                      pod/svc-latency-rc-7ckm2                                                    Started container svc-latency-rc\nsvc-latency-7096                     2m9s        Normal    SuccessfulCreate             replicationcontroller/svc-latency-rc                                        Created pod: svc-latency-rc-7ckm2\nvar-expansion-9626                   5m43s       Normal    Scheduled                    pod/var-expansion-e349f070-360f-4064-869c-98846c10b2e3                      Successfully assigned var-expansion-9626/var-expansion-e349f070-360f-4064-869c-98846c10b2e3 to bootstrap-e2e-minion-group-dwjn\nvar-expansion-9626                   5m40s       Normal    Pulled                       pod/var-expansion-e349f070-360f-4064-869c-98846c10b2e3                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nvar-expansion-9626                   5m40s       Normal    Created                      pod/var-expansion-e349f070-360f-4064-869c-98846c10b2e3                      Created container dapi-container\nvar-expansion-9626                   5m39s       Normal    Started                      pod/var-expansion-e349f070-360f-4064-869c-98846c10b2e3                      Started container dapi-container\nvar-expansion-9778                   5m55s       Normal    Scheduled                    pod/var-expansion-5c809e53-cce7-40a1-9cea-ec1fdac8675c                      Successfully assigned var-expansion-9778/var-expansion-5c809e53-cce7-40a1-9cea-ec1fdac8675c to bootstrap-e2e-minion-group-dwjn\nvar-expansion-9778                   5m52s       Normal    Pulled                       pod/var-expansion-5c809e53-cce7-40a1-9cea-ec1fdac8675c                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nvar-expansion-9778                   5m52s       
Normal    Created                      pod/var-expansion-5c809e53-cce7-40a1-9cea-ec1fdac8675c                      Created container dapi-container\nvar-expansion-9778                   5m51s       Normal    Started                      pod/var-expansion-5c809e53-cce7-40a1-9cea-ec1fdac8675c                      Started container dapi-container\nvolume-1118                          4m22s       Normal    LeaderElection               endpoints/example.com-nfs-volume-1118                                       external-provisioner-zdtrf_6aa977a3-a455-4a58-b56c-bb2bd66c39d1 became leader\nvolume-1118                          3m54s       Normal    Scheduled                    pod/exec-volume-test-preprovisionedpv-kpfd                                  Successfully assigned volume-1118/exec-volume-test-preprovisionedpv-kpfd to bootstrap-e2e-minion-group-dwjn\nvolume-1118                          3m52s       Normal    Pulled                       pod/exec-volume-test-preprovisionedpv-kpfd                                  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-1118                          3m52s       Normal    Created                      pod/exec-volume-test-preprovisionedpv-kpfd                                  Created container exec-container-preprovisionedpv-kpfd\nvolume-1118                          3m52s       Normal    Started                      pod/exec-volume-test-preprovisionedpv-kpfd                                  Started container exec-container-preprovisionedpv-kpfd\nvolume-1118                          4m32s       Normal    Scheduled                    pod/external-provisioner-zdtrf                                              Successfully assigned volume-1118/external-provisioner-zdtrf to bootstrap-e2e-minion-group-5wn8\nvolume-1118                          4m30s       Normal    Pulled                       pod/external-provisioner-zdtrf                                              Container image 
\"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\" already present on machine\nvolume-1118                          4m30s       Normal    Created                      pod/external-provisioner-zdtrf                                              Created container nfs-provisioner\nvolume-1118                          4m29s       Normal    Started                      pod/external-provisioner-zdtrf                                              Started container nfs-provisioner\nvolume-1118                          3m35s       Normal    Killing                      pod/external-provisioner-zdtrf                                              Stopping container nfs-provisioner\nvolume-1118                          4m17s       Normal    Scheduled                    pod/nfs-server                                                              Successfully assigned volume-1118/nfs-server to bootstrap-e2e-minion-group-dwjn\nvolume-1118                          4m16s       Normal    Pulled                       pod/nfs-server                                                              Container image \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\" already present on machine\nvolume-1118                          4m16s       Normal    Created                      pod/nfs-server                                                              Created container nfs-server\nvolume-1118                          4m15s       Normal    Started                      pod/nfs-server                                                              Started container nfs-server\nvolume-1118                          3m46s       Normal    Killing                      pod/nfs-server                                                              Stopping container nfs-server\nvolume-1118                          4m10s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-7l4ln                                             storageclass.storage.k8s.io \"volume-1118\" not 
found\nvolume-1152                          93s         Normal    Pulled                       pod/exec-volume-test-preprovisionedpv-tjkz                                  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-1152                          93s         Normal    Created                      pod/exec-volume-test-preprovisionedpv-tjkz                                  Created container exec-container-preprovisionedpv-tjkz\nvolume-1152                          92s         Normal    Started                      pod/exec-volume-test-preprovisionedpv-tjkz                                  Started container exec-container-preprovisionedpv-tjkz\nvolume-1152                          2m1s        Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-5khbg                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-1152                          2m1s        Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-5khbg                          Created container agnhost\nvolume-1152                          2m          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-5khbg                          Started container agnhost\nvolume-1152                          78s         Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-5khbg                          Stopping container agnhost\nvolume-1152                          112s        Warning   ProvisioningFailed           persistentvolumeclaim/pvc-5fnzd                                             storageclass.storage.k8s.io \"volume-1152\" not found\nvolume-1296                          2m45s       Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-7htw-ljfwd                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on 
machine
volume-1296  2m45s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-7htw-ljfwd  Created container agnhost
volume-1296  2m44s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-7htw-ljfwd  Started container agnhost
volume-1296  50s    Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-7htw-ljfwd  Stopping container agnhost
volume-1296  98s    Normal   Pulled                  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1296  98s    Normal   Created                 pod/local-client  Created container local-client
volume-1296  96s    Normal   Started                 pod/local-client  Started container local-client
volume-1296  76s    Normal   Killing                 pod/local-client  Stopping container local-client
volume-1296  2m20s  Normal   Pulled                  pod/local-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1296  2m20s  Normal   Created                 pod/local-injector  Created container local-injector
volume-1296  2m18s  Normal   Started                 pod/local-injector  Started container local-injector
volume-1296  113s   Normal   Killing                 pod/local-injector  Stopping container local-injector
volume-1296  2m29s  Warning  ProvisioningFailed      persistentvolumeclaim/pvc-vfdb9  storageclass.storage.k8s.io "volume-1296" not found
volume-1580  5m42s  Normal   Pulling                 pod/csi-hostpath-attacher-0  Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
volume-1580  5m26s  Normal   Pulled                  pod/csi-hostpath-attacher-0  Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
volume-1580  5m25s  Normal   Created                 pod/csi-hostpath-attacher-0  Created container csi-attacher
volume-1580  5m20s  Normal   Started                 pod/csi-hostpath-attacher-0  Started container csi-attacher
volume-1580  2m55s  Normal   Killing                 pod/csi-hostpath-attacher-0  Stopping container csi-attacher
volume-1580  5m51s  Warning  FailedCreate            statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-1580  5m48s  Normal   SuccessfulCreate        statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
volume-1580  5m44s  Normal   Pulled                  pod/csi-hostpath-provisioner-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
volume-1580  5m43s  Normal   Created                 pod/csi-hostpath-provisioner-0  Created container csi-provisioner
volume-1580  5m40s  Normal   Started                 pod/csi-hostpath-provisioner-0  Started container csi-provisioner
volume-1580  2m52s  Warning  FailedMount             pod/csi-hostpath-provisioner-0  MountVolume.SetUp failed for volume "csi-provisioner-token-fbmht" : secret "csi-provisioner-token-fbmht" not found
volume-1580  5m51s  Warning  FailedCreate            statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-1580  5m50s  Normal   SuccessfulCreate        statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
volume-1580  5m44s  Normal   Pulled                  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
volume-1580  5m43s  Normal   Created                 pod/csi-hostpath-resizer-0  Created container csi-resizer
volume-1580  5m40s  Normal   Started                 pod/csi-hostpath-resizer-0  Started container csi-resizer
volume-1580  5m50s  Warning  FailedCreate            statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-1580  5m50s  Normal   SuccessfulCreate        statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
volume-1580  5m27s  Normal   ExternalProvisioning    persistentvolumeclaim/csi-hostpathkvbbt  waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-1580" or manually created by system administrator
volume-1580  5m19s  Normal   Provisioning            persistentvolumeclaim/csi-hostpathkvbbt  External provisioner is provisioning volume for claim "volume-1580/csi-hostpathkvbbt"
volume-1580  5m19s  Normal   ProvisioningSucceeded   persistentvolumeclaim/csi-hostpathkvbbt  Successfully provisioned volume pvc-93d92240-494f-483f-9827-5df5b6037656
volume-1580  5m10s  Normal   Pulled                  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
volume-1580  5m10s  Normal   Created                 pod/csi-hostpathplugin-0  Created container node-driver-registrar
volume-1580  5m6s   Normal   Started                 pod/csi-hostpathplugin-0  Started container node-driver-registrar
volume-1580  5m48s  Normal   Pulling                 pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
volume-1580  5m28s  Normal   Pulled                  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
volume-1580  5m28s  Normal   Created                 pod/csi-hostpathplugin-0  Created container hostpath
volume-1580  5m22s  Normal   Started                 pod/csi-hostpathplugin-0  Started container hostpath
volume-1580  5m22s  Normal   Pulling                 pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
volume-1580  5m18s  Normal   Pulled                  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
volume-1580  5m18s  Normal   Created                 pod/csi-hostpathplugin-0  Created container liveness-probe
volume-1580  5m15s  Normal   Started                 pod/csi-hostpathplugin-0  Started container liveness-probe
volume-1580  2m55s  Normal   Killing                 pod/csi-hostpathplugin-0  Stopping container hostpath
volume-1580  2m55s  Normal   Killing                 pod/csi-hostpathplugin-0  Stopping container node-driver-registrar
volume-1580  2m55s  Normal   Killing                 pod/csi-hostpathplugin-0  Stopping container liveness-probe
volume-1580  2m50s  Warning  Unhealthy               pod/csi-hostpathplugin-0  Liveness probe failed: Get http://10.64.1.217:9898/healthz: dial tcp 10.64.1.217:9898: connect: connection refused
volume-1580  5m54s  Normal   SuccessfulCreate        statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volume-1580  5m46s  Normal   Pulling                 pod/csi-snapshotter-0  Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
volume-1580  5m29s  Normal   Pulled                  pod/csi-snapshotter-0  Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
volume-1580  5m28s  Normal   Created                 pod/csi-snapshotter-0  Created container csi-snapshotter
volume-1580  5m22s  Normal   Started                 pod/csi-snapshotter-0  Started container csi-snapshotter
volume-1580  5m50s  Warning  FailedCreate            statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
volume-1580  5m50s  Normal   SuccessfulCreate        statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volume-1580  4m8s   Normal   SuccessfulAttachVolume  pod/hostpath-client  AttachVolume.Attach succeeded for volume "pvc-93d92240-494f-483f-9827-5df5b6037656"
volume-1580  3m56s  Normal   Pulled                  pod/hostpath-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1580  3m56s  Normal   Created                 pod/hostpath-client  Created container hostpath-client
volume-1580  3m53s  Normal   Started                 pod/hostpath-client  Started container hostpath-client
volume-1580  3m38s  Normal   Killing                 pod/hostpath-client  Stopping container hostpath-client
volume-1580  5m15s  Normal   SuccessfulAttachVolume  pod/hostpath-injector  AttachVolume.Attach succeeded for volume "pvc-93d92240-494f-483f-9827-5df5b6037656"
volume-1580  5m7s   Warning  FailedMount             pod/hostpath-injector  MountVolume.MountDevice failed for volume "pvc-93d92240-494f-483f-9827-5df5b6037656" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi-hostpath-volume-1580 not found in the list of registered CSI drivers
volume-1580  4m57s  Normal   Pulled                  pod/hostpath-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1580  4m57s  Normal   Created                 pod/hostpath-injector  Created container hostpath-injector
volume-1580  4m54s  Normal   Started                 pod/hostpath-injector  Started container hostpath-injector
volume-1580  4m26s  Normal   Killing                 pod/hostpath-injector  Stopping container hostpath-injector
volume-3406  3s     Normal   Scheduled               pod/external-provisioner-68p2z  Successfully assigned volume-3406/external-provisioner-68p2z to bootstrap-e2e-minion-group-7htw
volume-3767  2m48s  Normal   Scheduled               pod/exec-volume-test-inlinevolume-lm4p  Successfully assigned volume-3767/exec-volume-test-inlinevolume-lm4p to bootstrap-e2e-minion-group-5wn8
volume-3767  2m41s  Normal   SuccessfulAttachVolume  pod/exec-volume-test-inlinevolume-lm4p  AttachVolume.Attach succeeded for volume "vol1"
volume-3767  2m31s  Normal   Pulled                  pod/exec-volume-test-inlinevolume-lm4p  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-3767  2m31s  Normal   Created                 pod/exec-volume-test-inlinevolume-lm4p  Created container exec-container-inlinevolume-lm4p
volume-3767  2m28s  Normal   Started                 pod/exec-volume-test-inlinevolume-lm4p  Started container exec-container-inlinevolume-lm4p
volume-3852  5m59s  Normal   Pulled                  pod/hostpath-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-3852  5m59s  Normal   Created                 pod/hostpath-client  Created container hostpath-client
volume-3852  5m59s  Normal   Started                 pod/hostpath-client  Started container hostpath-client
volume-3852  5m47s  Normal   Killing                 pod/hostpath-client  Stopping container hostpath-client
volume-3852  6m25s  Normal   Pulled                  pod/hostpath-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-3852  6m25s  Normal   Created                 pod/hostpath-injector  Created container hostpath-injector
volume-3852  6m26s  Normal   Started                 pod/hostpath-injector  Started container hostpath-injector
volume-3852  6m14s  Normal   Killing                 pod/hostpath-injector  Stopping container hostpath-injector
volume-3920  3m51s  Normal   Pulled                  pod/exec-volume-test-preprovisionedpv-f27q  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-3920  3m51s  Normal   Created                 pod/exec-volume-test-preprovisionedpv-f27q  Created container exec-container-preprovisionedpv-f27q
volume-3920  3m50s  Normal   Started                 pod/exec-volume-test-preprovisionedpv-f27q  Started container exec-container-preprovisionedpv-f27q
volume-3920  4m13s  Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-5wn8-wx2cp  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-3920  4m13s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-wx2cp  Created container agnhost
volume-3920  4m12s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-wx2cp  Started container agnhost
volume-3920  3m44s  Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-wx2cp  Stopping container agnhost
volume-3920  4m7s   Warning  ProvisioningFailed      persistentvolumeclaim/pvc-9xvvf  storageclass.storage.k8s.io "volume-3920" not found
volume-497   2m40s  Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-7htw-7dps6  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-497   2m39s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-7htw-7dps6  Created container agnhost
volume-497   2m38s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-7htw-7dps6  Started container agnhost
volume-497   55s    Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-7htw-7dps6  Stopping container agnhost
volume-497   96s    Normal   Pulled                  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-497   95s    Normal   Created                 pod/local-client  Created container local-client
volume-497   94s    Normal   Started                 pod/local-client  Started container local-client
volume-497   73s    Normal   Killing                 pod/local-client  Stopping container local-client
volume-497   2m20s  Normal   Pulled                  pod/local-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-497   2m20s  Normal   Created                 pod/local-injector  Created container local-injector
volume-497   2m19s  Normal   Started                 pod/local-injector  Started container local-injector
volume-497   115s   Normal   Killing                 pod/local-injector  Stopping container local-injector
volume-497   2m29s  Warning  ProvisioningFailed      persistentvolumeclaim/pvc-b5llq  storageclass.storage.k8s.io "volume-497" not found
volume-5343  6m3s   Normal   Scheduled               pod/exec-volume-test-inlinevolume-n2bt  Successfully assigned volume-5343/exec-volume-test-inlinevolume-n2bt to bootstrap-e2e-minion-group-1s6w
volume-5343  5m57s  Normal   SuccessfulAttachVolume  pod/exec-volume-test-inlinevolume-n2bt  AttachVolume.Attach succeeded for volume "vol1"
volume-5343  5m50s  Normal   Pulled                  pod/exec-volume-test-inlinevolume-n2bt  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-5343  5m50s  Normal   Created                 pod/exec-volume-test-inlinevolume-n2bt  Created container exec-container-inlinevolume-n2bt
volume-5343  5m50s  Normal   Started                 pod/exec-volume-test-inlinevolume-n2bt  Started container exec-container-inlinevolume-n2bt
volume-5657  3m35s  Normal   Scheduled               pod/exec-volume-test-preprovisionedpv-gp2d  Successfully assigned volume-5657/exec-volume-test-preprovisionedpv-gp2d to bootstrap-e2e-minion-group-5wn8
volume-5657  3m29s  Normal   SuccessfulAttachVolume  pod/exec-volume-test-preprovisionedpv-gp2d  AttachVolume.Attach succeeded for volume "gcepd-kn529"
volume-5657  3m23s  Normal   Pulled                  pod/exec-volume-test-preprovisionedpv-gp2d  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-5657  3m23s  Normal   Created                 pod/exec-volume-test-preprovisionedpv-gp2d  Created container exec-container-preprovisionedpv-gp2d
volume-5657  3m22s  Normal   Started                 pod/exec-volume-test-preprovisionedpv-gp2d  Started container exec-container-preprovisionedpv-gp2d
volume-5657  3m45s  Warning  ProvisioningFailed      persistentvolumeclaim/pvc-fr4h4  storageclass.storage.k8s.io "volume-5657" not found
volume-6857  2m37s  Normal   Pulled                  pod/exec-volume-test-preprovisionedpv-sn42  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-6857  2m36s  Normal   Created                 pod/exec-volume-test-preprovisionedpv-sn42  Created container exec-container-preprovisionedpv-sn42
volume-6857  2m36s  Normal   Started                 pod/exec-volume-test-preprovisionedpv-sn42  Started container exec-container-preprovisionedpv-sn42
volume-6857  2m47s  Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-dwjn-nvv89  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-6857  2m47s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-dwjn-nvv89  Created container agnhost
volume-6857  2m47s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-dwjn-nvv89  Started container agnhost
volume-6857  2m29s  Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-dwjn-nvv89  Stopping container agnhost
volume-6857  2m43s  Warning  ProvisioningFailed      persistentvolumeclaim/pvc-v77l7  storageclass.storage.k8s.io "volume-6857" not found
volume-6939  3m9s   Normal   Scheduled               pod/configmap-client  Successfully assigned volume-6939/configmap-client to bootstrap-e2e-minion-group-5wn8
volume-6939  3m7s   Normal   Pulled                  pod/configmap-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-6939  3m7s   Normal   Created                 pod/configmap-client  Created container configmap-client
volume-6939  3m7s   Normal   Started                 pod/configmap-client  Started container configmap-client
volume-6939  2m48s  Normal   Killing                 pod/configmap-client  Stopping container configmap-client
volume-6959  5m38s  Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-5wn8-8245l  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-6959  5m37s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-8245l  Created container agnhost
volume-6959  5m37s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-8245l  Started container agnhost
volume-6959  4m35s  Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-5wn8-8245l  Stopping container agnhost
volume-6959  4m59s  Normal   Pulled                  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-6959  4m58s  Normal   Created                 pod/local-client  Created container local-client
volume-6959  4m57s  Normal   Started                 pod/local-client  Started container local-client
volume-6959  4m49s  Normal   Killing                 pod/local-client  Stopping container local-client
volume-6959  5m21s  Normal   Pulled                  pod/local-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-6959  5m21s  Normal   Created                 pod/local-injector  Created container local-injector
volume-6959  5m21s  Normal   Started                 pod/local-injector  Started container local-injector
volume-6959  5m10s  Normal   Killing                 pod/local-injector  Stopping container local-injector
volume-6959  5m31s  Warning  ProvisioningFailed      persistentvolumeclaim/pvc-d748k  storageclass.storage.k8s.io "volume-6959" not found
volume-7455  100s   Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-1s6w-ntvzj  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-7455  100s   Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-1s6w-ntvzj  Created container agnhost
volume-7455  100s   Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-1s6w-ntvzj  Started container agnhost
volume-7455  31s    Normal   Pulled                  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-7455  30s    Normal   Created                 pod/local-client  Created container local-client
volume-7455  28s    Normal   Started                 pod/local-client  Started container local-client
volume-7455  17s    Normal   Killing                 pod/local-client  Stopping container local-client
volume-7455  74s    Normal   Pulled                  pod/local-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-7455  74s    Normal   Created                 pod/local-injector  Created container local-injector
volume-7455  72s    Normal   Started                 pod/local-injector  Started container local-injector
volume-7455  53s    Normal   Killing                 pod/local-injector  Stopping container local-injector
volume-7455  95s    Warning  ProvisioningFailed      persistentvolumeclaim/pvc-vtq9k  storageclass.storage.k8s.io "volume-7455" not found
volume-7729  70s    Normal   Scheduled               pod/exec-volume-test-inlinevolume-nmcf  Successfully assigned volume-7729/exec-volume-test-inlinevolume-nmcf to bootstrap-e2e-minion-group-7htw
volume-7729  67s    Normal   Pulled                  pod/exec-volume-test-inlinevolume-nmcf  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
volume-7729  67s    Normal   Created                 pod/exec-volume-test-inlinevolume-nmcf  Created container exec-container-inlinevolume-nmcf
volume-7729  67s    Normal   Started                 pod/exec-volume-test-inlinevolume-nmcf  Started container exec-container-inlinevolume-nmcf
volume-806   45s    Normal   Scheduled               pod/emptydir-injector  Successfully assigned volume-806/emptydir-injector to bootstrap-e2e-minion-group-7htw
volume-806   40s    Normal   Pulled                  pod/emptydir-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-806   40s    Normal   Created                 pod/emptydir-injector  Created container emptydir-injector
volume-806   39s    Normal   Started                 pod/emptydir-injector  Started container emptydir-injector
volume-806   17s    Normal   Killing                 pod/emptydir-injector  Stopping container emptydir-injector
volume-9031  5m53s  Normal   Pulled                  pod/hostpath-symlink-prep-volume-9031  Container image "docker.io/library/busybox:1.29" already present on machine
volume-9031  5m52s  Normal   Created                 pod/hostpath-symlink-prep-volume-9031  Created container init-volume-volume-9031
volume-9031  5m52s  Normal   Started                 pod/hostpath-symlink-prep-volume-9031  Started container init-volume-volume-9031
volume-9031  4m57s  Normal   Pulled                  pod/hostpath-symlink-prep-volume-9031  Container image "docker.io/library/busybox:1.29" already present on machine
volume-9031  4m57s  Normal   Created                 pod/hostpath-symlink-prep-volume-9031  Created container init-volume-volume-9031
volume-9031  4m57s  Normal   Started                 pod/hostpath-symlink-prep-volume-9031  Started container init-volume-volume-9031
volume-9031  5m18s  Normal   Pulled                  pod/hostpathsymlink-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-9031  5m18s  Normal   Created                 pod/hostpathsymlink-client  Created container hostpathsymlink-client
volume-9031  5m18s  Normal   Started                 pod/hostpathsymlink-client  Started container hostpathsymlink-client
volume-9031  5m6s   Normal   Killing                 pod/hostpathsymlink-client  Stopping container hostpathsymlink-client
volume-9031  5m42s  Normal   Pulled                  pod/hostpathsymlink-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-9031  5m42s  Normal   Created                 pod/hostpathsymlink-injector  Created container hostpathsymlink-injector
volume-9031  5m41s  Normal   Started                 pod/hostpathsymlink-injector  Started container hostpathsymlink-injector
volume-9031  5m26s  Normal   Killing                 pod/hostpathsymlink-injector  Stopping container hostpathsymlink-injector
volume-9110  5m30s  Normal   Pulled                  pod/hostexec-bootstrap-e2e-minion-group-7htw-r7cjc  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-9110  5m29s  Normal   Created                 pod/hostexec-bootstrap-e2e-minion-group-7htw-r7cjc  Created container agnhost
volume-9110  5m23s  Normal   Started                 pod/hostexec-bootstrap-e2e-minion-group-7htw-r7cjc  Started container agnhost
volume-9110  3m1s   Normal   Killing                 pod/hostexec-bootstrap-e2e-minion-group-7htw-r7cjc  Stopping container agnhost
volume-9110  3m57s  Normal   Pulled                  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-9110  3m57s  Normal   Created                 pod/local-client  Created container local-client
volume-9110  3m54s  Normal   Started                 pod/local-client  Started container 
local-client\nvolume-9110                          3m40s       Normal    Killing                      pod/local-client                                                            Stopping container local-client\nvolume-9110                          4m48s       Normal    Pulled                       pod/local-injector                                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9110                          4m47s       Normal    Created                      pod/local-injector                                                          Created container local-injector\nvolume-9110                          4m45s       Normal    Started                      pod/local-injector                                                          Started container local-injector\nvolume-9110                          4m22s       Normal    Killing                      pod/local-injector                                                          Stopping container local-injector\nvolume-9110                          5m7s        Warning   ProvisioningFailed           persistentvolumeclaim/pvc-hr7q2                                             storageclass.storage.k8s.io \"volume-9110\" not found\nvolume-945                           23s         Normal    Scheduled                    pod/exec-volume-test-dynamicpv-rs44                                         Successfully assigned volume-945/exec-volume-test-dynamicpv-rs44 to bootstrap-e2e-minion-group-5wn8\nvolume-945                           17s         Normal    SuccessfulAttachVolume       pod/exec-volume-test-dynamicpv-rs44                                         AttachVolume.Attach succeeded for volume \"pvc-6dfba1bc-e6aa-4292-a568-d32047444e67\"\nvolume-945                           8s          Normal    Pulled                       pod/exec-volume-test-dynamicpv-rs44                                         Container image 
\"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-945                           8s          Normal    Created                      pod/exec-volume-test-dynamicpv-rs44                                         Created container exec-container-dynamicpv-rs44\nvolume-945                           8s          Normal    Started                      pod/exec-volume-test-dynamicpv-rs44                                         Started container exec-container-dynamicpv-rs44\nvolume-945                           26s         Normal    WaitForFirstConsumer         persistentvolumeclaim/gcepdx9qs4                                            waiting for first consumer to be created before binding\nvolume-945                           23s         Normal    ProvisioningSucceeded        persistentvolumeclaim/gcepdx9qs4                                            Successfully provisioned volume pvc-6dfba1bc-e6aa-4292-a568-d32047444e67 using kubernetes.io/gce-pd\nvolume-9478                          16s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-s6676                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-9478                          16s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-s6676                          Created container agnhost\nvolume-9478                          16s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-s6676                          Started container agnhost\nvolume-9478                          3s          Normal    Pulled                       pod/local-injector                                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9478                          3s          Normal    Created                      pod/local-injector                               
                           Created container local-injector\nvolume-9998                          4m12s       Normal    Scheduled                    pod/gcepd-client                                                            Successfully assigned volume-9998/gcepd-client to bootstrap-e2e-minion-group-5wn8\nvolume-9998                          4m12s       Warning   FailedMount                  pod/gcepd-client                                                            Unable to attach or mount volumes: unmounted volumes=[gcepd-volume-0 default-token-r47qd], unattached volumes=[gcepd-volume-0 default-token-r47qd]: error processing PVC volume-9998/gcepdcgpsc: failed to fetch PVC from API server: persistentvolumeclaims \"gcepdcgpsc\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-5wn8\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"volume-9998\": no relationship found between node \"bootstrap-e2e-minion-group-5wn8\" and this object\nvolume-9998                          4m11s       Warning   FailedMount                  pod/gcepd-client                                                            MountVolume.SetUp failed for volume \"default-token-r47qd\" : failed to sync secret cache: timed out waiting for the condition\nvolume-9998                          3m57s       Normal    Pulled                       pod/gcepd-client                                                            Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9998                          3m57s       Normal    Created                      pod/gcepd-client                                                            Created container gcepd-client\nvolume-9998                          3m57s       Normal    Started                      pod/gcepd-client                                                            Started container gcepd-client\nvolume-9998                          3m49s       Normal    Killing             
         pod/gcepd-client                                                            Stopping container gcepd-client\nvolume-9998                          4m46s       Normal    Scheduled                    pod/gcepd-injector                                                          Successfully assigned volume-9998/gcepd-injector to bootstrap-e2e-minion-group-5wn8\nvolume-9998                          4m40s       Normal    SuccessfulAttachVolume       pod/gcepd-injector                                                          AttachVolume.Attach succeeded for volume \"pvc-49d10783-2905-4c21-a09c-525d62e66843\"\nvolume-9998                          4m31s       Normal    Pulled                       pod/gcepd-injector                                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9998                          4m31s       Normal    Created                      pod/gcepd-injector                                                          Created container gcepd-injector\nvolume-9998                          4m31s       Normal    Started                      pod/gcepd-injector                                                          Started container gcepd-injector\nvolume-9998                          4m19s       Normal    Killing                      pod/gcepd-injector                                                          Stopping container gcepd-injector\nvolume-9998                          4m51s       Normal    WaitForFirstConsumer         persistentvolumeclaim/gcepdcgpsc                                            waiting for first consumer to be created before binding\nvolume-9998                          4m48s       Normal    ProvisioningSucceeded        persistentvolumeclaim/gcepdcgpsc                                            Successfully provisioned volume pvc-49d10783-2905-4c21-a09c-525d62e66843 using kubernetes.io/gce-pd\nvolume-expand-3396                   3m24s       Normal   
 ProvisioningSucceeded        persistentvolumeclaim/gcepdxtkpt                                            Successfully provisioned volume pvc-062a7270-10c9-4df3-96e0-27edf84ced37 using kubernetes.io/gce-pd\nvolume-expand-3396                   2m22s       Normal    VolumeResizeSuccessful       persistentvolumeclaim/gcepdxtkpt                                            ExpandVolume succeeded for volume volume-expand-3396/gcepdxtkpt\nvolume-expand-3396                   3m22s       Normal    Scheduled                    pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   Successfully assigned volume-expand-3396/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69 to bootstrap-e2e-minion-group-7htw\nvolume-expand-3396                   3m16s       Normal    SuccessfulAttachVolume       pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   AttachVolume.Attach succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\"\nvolume-expand-3396                   3m4s        Normal    SuccessfulMountVolume        pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   MapVolume.MapPodDevice succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io/gce-pd/volumeDevices/bootstrap-e2e-dynamic-pvc-062a7270-10c9-4df3-96e0-27edf84ced37\"\nvolume-expand-3396                   3m4s        Normal    SuccessfulMountVolume        pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   MapVolume.MapPodDevice succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\" volumeMapPath \"/var/lib/kubelet/pods/f58f187f-a6f6-4c87-bedc-5fda0abc8b5b/volumeDevices/kubernetes.io~gce-pd\"\nvolume-expand-3396                   2m58s       Normal    Pulled                       pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   Container image \"docker.io/library/busybox:1.29\" already present on 
machine\nvolume-expand-3396                   2m58s       Normal    Created                      pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   Created container write-pod\nvolume-expand-3396                   2m56s       Normal    Started                      pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   Started container write-pod\nvolume-expand-3396                   2m45s       Normal    Killing                      pod/security-context-4b5d3833-19c7-4aae-bb4a-b8c032705e69                   Stopping container write-pod\nvolume-expand-3396                   2m19s       Normal    Scheduled                    pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Successfully assigned volume-expand-3396/security-context-5049d644-88cc-4de8-9679-204f81c3eaef to bootstrap-e2e-minion-group-7htw\nvolume-expand-3396                   2m19s       Warning   FailedMount                  pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Unable to attach or mount volumes: unmounted volumes=[default-token-rhcz9 volume1], unattached volumes=[default-token-rhcz9 volume1]: error processing PVC volume-expand-3396/gcepdxtkpt: failed to fetch PVC from API server: persistentvolumeclaims \"gcepdxtkpt\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-7htw\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"volume-expand-3396\": no relationship found between node \"bootstrap-e2e-minion-group-7htw\" and this object\nvolume-expand-3396                   2m18s       Warning   FailedMount                  pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   MountVolume.SetUp failed for volume \"default-token-rhcz9\" : failed to sync secret cache: timed out waiting for the condition\nvolume-expand-3396                   2m13s       Normal    SuccessfulAttachVolume       
pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   AttachVolume.Attach succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\"\nvolume-expand-3396                   2m1s        Normal    SuccessfulMountVolume        pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   MapVolume.MapPodDevice succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io/gce-pd/volumeDevices/bootstrap-e2e-dynamic-pvc-062a7270-10c9-4df3-96e0-27edf84ced37\"\nvolume-expand-3396                   2m1s        Normal    SuccessfulMountVolume        pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   MapVolume.MapPodDevice succeeded for volume \"pvc-062a7270-10c9-4df3-96e0-27edf84ced37\" volumeMapPath \"/var/lib/kubelet/pods/ede75b96-5ea5-478c-a92a-986bcb82b9f0/volumeDevices/kubernetes.io~gce-pd\"\nvolume-expand-3396                   117s        Normal    Pulled                       pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-expand-3396                   117s        Normal    Created                      pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Created container write-pod\nvolume-expand-3396                   115s        Normal    Started                      pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Started container write-pod\nvolume-expand-3396                   108s        Normal    Killing                      pod/security-context-5049d644-88cc-4de8-9679-204f81c3eaef                   Stopping container write-pod\nvolume-expand-5571                   28s         Normal    WaitForFirstConsumer         persistentvolumeclaim/gcepd4d2vw                                            waiting for first consumer to be created before binding\nvolume-expand-5571              
     24s         Normal    ProvisioningSucceeded        persistentvolumeclaim/gcepd4d2vw                                            Successfully provisioned volume pvc-73053dc2-fe36-4b60-999d-661d92ca6aed using kubernetes.io/gce-pd\nvolume-expand-5571                   22s         Normal    Scheduled                    pod/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba                   Successfully assigned volume-expand-5571/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba to bootstrap-e2e-minion-group-7htw\nvolume-expand-5571                   16s         Normal    SuccessfulAttachVolume       pod/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba                   AttachVolume.Attach succeeded for volume \"pvc-73053dc2-fe36-4b60-999d-661d92ca6aed\"\nvolume-expand-5571                   8s          Normal    Pulled                       pod/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba                   Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-expand-5571                   7s          Normal    Created                      pod/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba                   Created container write-pod\nvolume-expand-5571                   7s          Normal    Started                      pod/security-context-f6f3e01d-70bc-448c-bd20-fbfe8a36fcba                   Started container write-pod\nvolume-expand-6580                   4m33s       Normal    Pulled                       pod/csi-hostpath-attacher-0                                                 Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nvolume-expand-6580                   4m32s       Normal    Created                      pod/csi-hostpath-attacher-0                                                 Created container csi-attacher\nvolume-expand-6580                   4m31s       Normal    Started                      pod/csi-hostpath-attacher-0                                  
               Started container csi-attacher\nvolume-expand-6580                   2m42s       Normal    Killing                      pod/csi-hostpath-attacher-0                                                 Stopping container csi-attacher\nvolume-expand-6580                   4m40s       Warning   FailedCreate                 statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-6580                   4m37s       Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nvolume-expand-6580                   4m34s       Normal    Pulled                       pod/csi-hostpath-provisioner-0                                              Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nvolume-expand-6580                   4m34s       Normal    Created                      pod/csi-hostpath-provisioner-0                                              Created container csi-provisioner\nvolume-expand-6580                   4m31s       Normal    Started                      pod/csi-hostpath-provisioner-0                                              Started container csi-provisioner\nvolume-expand-6580                   2m42s       Normal    Killing                      pod/csi-hostpath-provisioner-0                                              Stopping container csi-provisioner\nvolume-expand-6580                   4m40s       Warning   FailedCreate                 statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: 
unable to validate against any pod security policy: []\nvolume-expand-6580                   4m40s       Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nvolume-expand-6580                   4m33s       Normal    Pulled                       pod/csi-hostpath-resizer-0                                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nvolume-expand-6580                   4m33s       Normal    Created                      pod/csi-hostpath-resizer-0                                                  Created container csi-resizer\nvolume-expand-6580                   4m31s       Normal    Started                      pod/csi-hostpath-resizer-0                                                  Started container csi-resizer\nvolume-expand-6580                   2m41s       Normal    Killing                      pod/csi-hostpath-resizer-0                                                  Stopping container csi-resizer\nvolume-expand-6580                   4m40s       Warning   FailedCreate                 statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-6580                   4m39s       Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nvolume-expand-6580                   4m39s       Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathjnjbd                                     waiting for a volume to be created, either by external provisioner 
\"csi-hostpath-volume-expand-6580\" or manually created by system administrator\nvolume-expand-6580                   4m31s       Normal    Provisioning                 persistentvolumeclaim/csi-hostpathjnjbd                                     External provisioner is provisioning volume for claim \"volume-expand-6580/csi-hostpathjnjbd\"\nvolume-expand-6580                   4m31s       Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpathjnjbd                                     Successfully provisioned volume pvc-2c91d87d-8124-4e05-8e9a-e9b55e77a35f\nvolume-expand-6580                   4m17s       Warning   ExternalExpanding            persistentvolumeclaim/csi-hostpathjnjbd                                     Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nvolume-expand-6580                   4m16s       Normal    Resizing                     persistentvolumeclaim/csi-hostpathjnjbd                                     External resizer is resizing volume pvc-2c91d87d-8124-4e05-8e9a-e9b55e77a35f\nvolume-expand-6580                   4m16s       Normal    FileSystemResizeRequired     persistentvolumeclaim/csi-hostpathjnjbd                                     Require file system resize of volume on node\nvolume-expand-6580                   3m7s        Normal    FileSystemResizeSuccessful   persistentvolumeclaim/csi-hostpathjnjbd                                     MountVolume.NodeExpandVolume succeeded for volume \"pvc-2c91d87d-8124-4e05-8e9a-e9b55e77a35f\"\nvolume-expand-6580                   4m42s       Warning   FailedMount                  pod/csi-hostpathplugin-0                                                    MountVolume.SetUp failed for volume \"default-token-m7md4\" : failed to sync secret cache: timed out waiting for the condition\nvolume-expand-6580                   4m40s       Normal    Pulled                       pod/csi-hostpathplugin-0            
                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nvolume-expand-6580                   4m40s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container node-driver-registrar\nvolume-expand-6580                   4m40s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container node-driver-registrar\nvolume-expand-6580                   4m40s       Normal    Pulled                       pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nvolume-expand-6580                   4m40s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container hostpath\nvolume-expand-6580                   4m39s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container hostpath\nvolume-expand-6580                   4m39s       Normal    Pulled                       pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nvolume-expand-6580                   4m39s       Normal    Created                      pod/csi-hostpathplugin-0                                                    Created container liveness-probe\nvolume-expand-6580                   4m36s       Normal    Started                      pod/csi-hostpathplugin-0                                                    Started container liveness-probe\nvolume-expand-6580                   2m42s       Normal    Killing                      pod/csi-hostpathplugin-0                                                    Stopping 
container node-driver-registrar\nvolume-expand-6580                   2m42s       Normal    Killing                      pod/csi-hostpathplugin-0                                                    Stopping container liveness-probe\nvolume-expand-6580                   2m42s       Normal    Killing                      pod/csi-hostpathplugin-0                                                    Stopping container hostpath\nvolume-expand-6580                   2m41s       Warning   Unhealthy                    pod/csi-hostpathplugin-0                                                    Liveness probe failed: Get http://10.64.2.222:9898/healthz: dial tcp 10.64.2.222:9898: connect: connection refused\nvolume-expand-6580                   4m43s       Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nvolume-expand-6580                   4m36s       Normal    Pulled                       pod/csi-snapshotter-0                                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nvolume-expand-6580                   4m36s       Normal    Created                      pod/csi-snapshotter-0                                                       Created container csi-snapshotter\nvolume-expand-6580                   4m33s       Normal    Started                      pod/csi-snapshotter-0                                                       Started container csi-snapshotter\nvolume-expand-6580                   4m40s       Warning   FailedCreate                 statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-6580                   4m40s       Normal    
SuccessfulCreate             statefulset/csi-snapshotter                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volume-expand-6580   4m28s   Normal    SuccessfulAttachVolume       pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   AttachVolume.Attach succeeded for volume "pvc-2c91d87d-8124-4e05-8e9a-e9b55e77a35f"
volume-expand-6580   4m21s   Normal    Pulled                       pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   Container image "docker.io/library/busybox:1.29" already present on machine
volume-expand-6580   4m21s   Normal    Created                      pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   Created container write-pod
volume-expand-6580   4m20s   Normal    Started                      pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   Started container write-pod
volume-expand-6580   3m7s    Normal    FileSystemResizeSuccessful   pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   MountVolume.NodeExpandVolume succeeded for volume "pvc-2c91d87d-8124-4e05-8e9a-e9b55e77a35f"
volume-expand-6580   3m6s    Normal    Killing                      pod/security-context-f656e658-75ee-45a3-98fb-fbab447cb3ea   Stopping container write-pod
volume-expand-7985   5m25s   Normal    Pulled                       pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
volume-expand-7985   5m25s   Normal    Created                      pod/csi-hostpath-attacher-0   Created container csi-attacher
volume-expand-7985   5m19s   Normal    Started                      pod/csi-hostpath-attacher-0   Started container csi-attacher
volume-expand-7985   3m24s   Normal    Killing                      pod/csi-hostpath-attacher-0   Stopping container csi-attacher
volume-expand-7985   5m41s   Warning   FailedCreate                 statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7985   5m40s   Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
volume-expand-7985   5m29s   Normal    Pulled                       pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
volume-expand-7985   5m29s   Normal    Created                      pod/csi-hostpath-provisioner-0   Created container csi-provisioner
volume-expand-7985   5m22s   Normal    Started                      pod/csi-hostpath-provisioner-0   Started container csi-provisioner
volume-expand-7985   3m21s   Normal    Killing                      pod/csi-hostpath-provisioner-0   Stopping container csi-provisioner
volume-expand-7985   5m41s   Warning   FailedCreate                 statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7985   5m41s   Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
volume-expand-7985   5m26s   Normal    Pulled                       pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
volume-expand-7985   5m26s   Normal    Created                      pod/csi-hostpath-resizer-0   Created container csi-resizer
volume-expand-7985   5m20s   Normal    Started                      pod/csi-hostpath-resizer-0   Started container csi-resizer
volume-expand-7985   3m20s   Normal    Killing                      pod/csi-hostpath-resizer-0   Stopping container csi-resizer
volume-expand-7985   5m41s   Warning   FailedCreate                 statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7985   5m40s   Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
volume-expand-7985   5m28s   Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathh8zzv   waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-expand-7985" or manually created by system administrator
volume-expand-7985   5m17s   Normal    Provisioning                 persistentvolumeclaim/csi-hostpathh8zzv   External provisioner is provisioning volume for claim "volume-expand-7985/csi-hostpathh8zzv"
volume-expand-7985   5m17s   Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpathh8zzv   Successfully provisioned volume pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f
volume-expand-7985   4m34s   Warning   ExternalExpanding            persistentvolumeclaim/csi-hostpathh8zzv   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
volume-expand-7985   4m33s   Normal    Resizing                     persistentvolumeclaim/csi-hostpathh8zzv   External resizer is resizing volume pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f
volume-expand-7985   4m32s   Normal    FileSystemResizeRequired     persistentvolumeclaim/csi-hostpathh8zzv   Require file system resize of volume on node
volume-expand-7985   4m25s   Normal    FileSystemResizeSuccessful   persistentvolumeclaim/csi-hostpathh8zzv   MountVolume.NodeExpandVolume succeeded for volume "pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f"
volume-expand-7985   5m31s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
volume-expand-7985   5m29s   Normal    Created                      pod/csi-hostpathplugin-0   Created container node-driver-registrar
volume-expand-7985   5m23s   Normal    Started                      pod/csi-hostpathplugin-0   Started container node-driver-registrar
volume-expand-7985   5m22s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
volume-expand-7985   5m22s   Normal    Created                      pod/csi-hostpathplugin-0   Created container hostpath
volume-expand-7985   5m18s   Normal    Started                      pod/csi-hostpathplugin-0   Started container hostpath
volume-expand-7985   5m18s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
volume-expand-7985   5m18s   Normal    Created                      pod/csi-hostpathplugin-0   Created container liveness-probe
volume-expand-7985   5m15s   Normal    Started                      pod/csi-hostpathplugin-0   Started container liveness-probe
volume-expand-7985   3m20s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
volume-expand-7985   3m20s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container liveness-probe
volume-expand-7985   3m20s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container hostpath
volume-expand-7985   5m42s   Normal    SuccessfulCreate             statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volume-expand-7985   5m25s   Normal    Pulled                       pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
volume-expand-7985   5m25s   Normal    Created                      pod/csi-snapshotter-0   Created container csi-snapshotter
volume-expand-7985   5m20s   Normal    Started                      pod/csi-snapshotter-0   Started container csi-snapshotter
volume-expand-7985   3m20s   Normal    Killing                      pod/csi-snapshotter-0   Stopping container csi-snapshotter
volume-expand-7985   5m40s   Normal    SuccessfulCreate             statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volume-expand-7985   4m30s   Normal    SuccessfulAttachVolume       pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   AttachVolume.Attach succeeded for volume "pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f"
volume-expand-7985   4m25s   Normal    FileSystemResizeSuccessful   pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   MountVolume.NodeExpandVolume succeeded for volume "pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f"
volume-expand-7985   4m19s   Normal    Pulled                       pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   Container image "docker.io/library/busybox:1.29" already present on machine
volume-expand-7985   4m18s   Normal    Created                      pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   Created container write-pod
volume-expand-7985   4m16s   Normal    Started                      pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   Started container write-pod
volume-expand-7985   4m6s    Normal    Killing                      pod/security-context-6a29a094-ed13-4f90-ab55-5115fe107b7c   Stopping container write-pod
volume-expand-7985   5m14s   Normal    SuccessfulAttachVolume       pod/security-context-909c4d4d-fb6e-4e2e-9912-f46d0425bcba   AttachVolume.Attach succeeded for volume "pvc-d2cb38df-2047-4aeb-ba0b-15d1f5a9904f"
volume-expand-7985   4m58s   Normal    Pulled                       pod/security-context-909c4d4d-fb6e-4e2e-9912-f46d0425bcba   Container image "docker.io/library/busybox:1.29" already present on machine
volume-expand-7985   4m58s   Normal    Created                      pod/security-context-909c4d4d-fb6e-4e2e-9912-f46d0425bcba   Created container write-pod
volume-expand-7985   4m55s   Normal    Started                      pod/security-context-909c4d4d-fb6e-4e2e-9912-f46d0425bcba   Started container write-pod
volume-expand-7985   4m44s   Normal    Killing                      pod/security-context-909c4d4d-fb6e-4e2e-9912-f46d0425bcba   Stopping container write-pod
volume-expand-9570   2m10s   Normal    Pulled                       pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
volume-expand-9570   2m10s   Normal    Created                      pod/csi-hostpath-attacher-0   Created container csi-attacher
volume-expand-9570   2m10s   Normal    Started                      pod/csi-hostpath-attacher-0   Started container csi-attacher
volume-expand-9570   20s     Normal    Killing                      pod/csi-hostpath-attacher-0   Stopping container csi-attacher
volume-expand-9570   2m15s   Warning   FailedCreate                 statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-9570   2m12s   Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
volume-expand-9570   17s     Normal    Pulled                       pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
volume-expand-9570   17s     Normal    Created                      pod/csi-hostpath-provisioner-0   Created container csi-provisioner
volume-expand-9570   2m11s   Normal    Started                      pod/csi-hostpath-provisioner-0   Started container csi-provisioner
volume-expand-9570   17s     Warning   FailedMount                  pod/csi-hostpath-provisioner-0   MountVolume.SetUp failed for volume "csi-provisioner-token-fw5fq" : secret "csi-provisioner-token-fw5fq" not found
volume-expand-9570   16s     Warning   Failed                       pod/csi-hostpath-provisioner-0   Error: failed to start container "csi-provisioner": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/3ff1ac2b-6605-4923-9f32-26441cdb886e/volumes/kubernetes.io~secret/csi-provisioner-token-fw5fq\\\" to rootfs \\\"/var/lib/docker/overlay2/f84f3485d14a5f282d0380130b236c54add138c2f0413637b3466fa124dac924/merged\\\" at \\\"/var/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"stat /var/lib/kubelet/pods/3ff1ac2b-6605-4923-9f32-26441cdb886e/volumes/kubernetes.io~secret/csi-provisioner-token-fw5fq: no such file or directory\\\"\"": unknown
volume-expand-9570   2m16s   Warning   FailedCreate                 statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-9570   2m15s   Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
volume-expand-9570   17s     Normal    Pulled                       pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
volume-expand-9570   17s     Normal    Created                      pod/csi-hostpath-resizer-0   Created container csi-resizer
volume-expand-9570   16s     Normal    Started                      pod/csi-hostpath-resizer-0   Started container csi-resizer
volume-expand-9570   9s      Warning   FailedMount                  pod/csi-hostpath-resizer-0   MountVolume.SetUp failed for volume "csi-resizer-token-8ws7z" : secret "csi-resizer-token-8ws7z" not found
volume-expand-9570   2m15s   Warning   FailedCreate                 statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-9570   2m15s   Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
volume-expand-9570   2m11s   Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathm6xwv   waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-expand-9570" or manually created by system administrator
volume-expand-9570   2m10s   Normal    Provisioning                 persistentvolumeclaim/csi-hostpathm6xwv   External provisioner is provisioning volume for claim "volume-expand-9570/csi-hostpathm6xwv"
volume-expand-9570   2m9s    Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpathm6xwv   Successfully provisioned volume pvc-fab4845f-525b-4e2c-b8b0-1c77726008da
volume-expand-9570   109s    Warning   ExternalExpanding            persistentvolumeclaim/csi-hostpathm6xwv   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
volume-expand-9570   109s    Normal    Resizing                     persistentvolumeclaim/csi-hostpathm6xwv   External resizer is resizing volume pvc-fab4845f-525b-4e2c-b8b0-1c77726008da
volume-expand-9570   108s    Normal    FileSystemResizeRequired     persistentvolumeclaim/csi-hostpathm6xwv   Require file system resize of volume on node
volume-expand-9570   50s     Normal    FileSystemResizeSuccessful   persistentvolumeclaim/csi-hostpathm6xwv   MountVolume.NodeExpandVolume succeeded for volume "pvc-fab4845f-525b-4e2c-b8b0-1c77726008da"
volume-expand-9570   2m18s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
volume-expand-9570   2m18s   Normal    Created                      pod/csi-hostpathplugin-0   Created container node-driver-registrar
volume-expand-9570   2m18s   Normal    Started                      pod/csi-hostpathplugin-0   Started container node-driver-registrar
volume-expand-9570   2m18s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
volume-expand-9570   2m18s   Normal    Created                      pod/csi-hostpathplugin-0   Created container hostpath
volume-expand-9570   2m17s   Normal    Started                      pod/csi-hostpathplugin-0   Started container hostpath
volume-expand-9570   2m17s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
volume-expand-9570   2m17s   Normal    Created                      pod/csi-hostpathplugin-0   Created container liveness-probe
volume-expand-9570   2m17s   Normal    Started                      pod/csi-hostpathplugin-0   Started container liveness-probe
volume-expand-9570   18s     Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container liveness-probe
volume-expand-9570   18s     Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
volume-expand-9570   18s     Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container hostpath
volume-expand-9570   18s     Warning   FailedPreStopHook            pod/csi-hostpathplugin-0   Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_volume-expand-9570(6d14ff52-c3a1-4436-bd89-735844df78b8)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
volume-expand-9570   2m19s   Normal    SuccessfulCreate             statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volume-expand-9570   17s     Normal    Pulled                       pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
volume-expand-9570   17s     Normal    Created                      pod/csi-snapshotter-0   Created container csi-snapshotter
volume-expand-9570   16s     Normal    Started                      pod/csi-snapshotter-0   Started container csi-snapshotter
volume-expand-9570   9s      Warning   FailedMount                  pod/csi-snapshotter-0   MountVolume.SetUp failed for volume "csi-snapshotter-token-xswg8" : secret "csi-snapshotter-token-xswg8" not found
volume-expand-9570   2m15s   Warning   FailedCreate                 statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-9570   2m15s   Normal    SuccessfulCreate             statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volume-expand-9570   2m6s    Normal    SuccessfulAttachVolume       pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   AttachVolume.Attach succeeded for volume "pvc-fab4845f-525b-4e2c-b8b0-1c77726008da"
volume-expand-9570   117s    Normal    SuccessfulMountVolume        pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   MapVolume.MapPodDevice succeeded for volume "pvc-fab4845f-525b-4e2c-b8b0-1c77726008da" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-fab4845f-525b-4e2c-b8b0-1c77726008da/dev"
volume-expand-9570   117s    Normal    SuccessfulMountVolume        pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   MapVolume.MapPodDevice succeeded for volume "pvc-fab4845f-525b-4e2c-b8b0-1c77726008da" volumeMapPath "/var/lib/kubelet/pods/de25777b-cd20-46f6-bbd8-047e0750d2b9/volumeDevices/kubernetes.io~csi"
volume-expand-9570   115s    Normal    Pulled                       pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   Container image "docker.io/library/busybox:1.29" already present on machine
volume-expand-9570   115s    Normal    Created                      pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   Created container write-pod
volume-expand-9570   115s    Normal    Started                      pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   Started container write-pod
volume-expand-9570   50s     Normal    FileSystemResizeSuccessful   pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   MountVolume.NodeExpandVolume succeeded for volume "pvc-fab4845f-525b-4e2c-b8b0-1c77726008da"
volume-expand-9570   47s     Normal    Killing                      pod/security-context-bc64e334-91e7-45f0-ab10-2732aa87dd77   Stopping container write-pod
volumemode-2832      7m8s    Normal    Pulled                       pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
volumemode-2832      7m8s    Normal    Created                      pod/csi-hostpath-attacher-0   Created container csi-attacher
volumemode-2832      7m6s    Normal    Started                      pod/csi-hostpath-attacher-0   Started container csi-attacher
volumemode-2832      5m56s   Normal    Killing                      pod/csi-hostpath-attacher-0   Stopping container csi-attacher
volumemode-2832      7m15s   Warning   FailedCreate                 statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volumemode-2832      7m13s   Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
volumemode-2832      5m50s   Normal    Pulled                       pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
volumemode-2832      5m50s   Normal    Created                      pod/csi-hostpath-provisioner-0   Created container csi-provisioner
volumemode-2832      7m7s    Normal    Started                      pod/csi-hostpath-provisioner-0   Started container csi-provisioner
volumemode-2832      5m53s   Warning   FailedMount                  pod/csi-hostpath-provisioner-0   MountVolume.SetUp failed for volume "csi-provisioner-token-25mwc" : secret "csi-provisioner-token-25mwc" not found
volumemode-2832      7m16s   Warning   FailedCreate                 statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volumemode-2832      7m15s   Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
volumemode-2832      5m51s   Normal    Pulled                       pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
volumemode-2832      5m50s   Normal    Created                      pod/csi-hostpath-resizer-0   Created container csi-resizer
volumemode-2832      5m48s   Normal    Started                      pod/csi-hostpath-resizer-0   Started container csi-resizer
volumemode-2832      5m47s   Warning   FailedMount                  pod/csi-hostpath-resizer-0   MountVolume.SetUp failed for volume "csi-resizer-token-v8bck" : secret "csi-resizer-token-v8bck" not found
volumemode-2832      7m16s   Warning   FailedCreate                 statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volumemode-2832      7m14s   Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
volumemode-2832      7m13s   Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpath4dz9r   waiting for a volume to be created, either by external provisioner "csi-hostpath-volumemode-2832" or manually created by system administrator
volumemode-2832      7m6s    Normal    Provisioning                 persistentvolumeclaim/csi-hostpath4dz9r   External provisioner is provisioning volume for claim "volumemode-2832/csi-hostpath4dz9r"
volumemode-2832      7m5s    Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpath4dz9r   Successfully provisioned volume pvc-46874584-2bf8-4179-bbec-7a5a8bb64f06
volumemode-2832      7m16s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
volumemode-2832      7m16s   Normal    Created                      pod/csi-hostpathplugin-0   Created container node-driver-registrar
volumemode-2832      7m15s   Normal    Started                      pod/csi-hostpathplugin-0   Started container node-driver-registrar
volumemode-2832      7m15s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
volumemode-2832      7m15s   Normal    Created                      pod/csi-hostpathplugin-0   Created container hostpath
volumemode-2832      7m13s   Normal    Started                      pod/csi-hostpathplugin-0   Started container hostpath
volumemode-2832      7m13s   Normal    Pulled                       pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
volumemode-2832      7m13s   Normal    Created                      pod/csi-hostpathplugin-0   Created container liveness-probe
volumemode-2832      7m10s   Normal    Started                      pod/csi-hostpathplugin-0   Started container liveness-probe
volumemode-2832      5m53s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
volumemode-2832      5m53s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container liveness-probe
volumemode-2832      5m53s   Normal    Killing                      pod/csi-hostpathplugin-0   Stopping container hostpath
volumemode-2832      5m52s   Warning   FailedPreStopHook            pod/csi-hostpathplugin-0   Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_volumemode-2832(9f28c066-f720-4a0e-bc06-d8993109d3b1)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
volumemode-2832      7m18s   Normal    SuccessfulCreate             statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volumemode-2832      5m51s   Normal    Pulled                       pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
volumemode-2832      5m51s   Normal    Created                      pod/csi-snapshotter-0   Created container csi-snapshotter
volumemode-2832      5m49s   Normal    Started                      pod/csi-snapshotter-0   Started container csi-snapshotter
volumemode-2832      5m43s   Warning   FailedMount                  pod/csi-snapshotter-0   MountVolume.SetUp failed for volume "csi-snapshotter-token-hplpt" : secret "csi-snapshotter-token-hplpt" not found
volumemode-2832      7m15s   Normal    SuccessfulCreate             statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volumemode-2832      6m49s   Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-1s6w-ps7bv   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-2832      6m49s   Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-ps7bv   Created container agnhost
volumemode-2832      6m48s   Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-ps7bv   Started container agnhost
volumemode-2832      6m34s   Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-1s6w-ps7bv   Stopping container agnhost
volumemode-2832      7m2s    Normal    Scheduled                    pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   Successfully assigned volumemode-2832/security-context-538c4410-da88-406b-b9b5-118ad001f228 to bootstrap-e2e-minion-group-1s6w
volumemode-2832      7m1s    Normal    SuccessfulAttachVolume       pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   AttachVolume.Attach succeeded for volume "pvc-46874584-2bf8-4179-bbec-7a5a8bb64f06"
volumemode-2832      6m59s   Normal    Pulled                       pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   Container image "docker.io/library/busybox:1.29" already present on machine
volumemode-2832      6m59s   Normal    Created                      pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   Created container write-pod
volumemode-2832      6m57s   Normal    Started                      pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   Started container write-pod
volumemode-2832      6m35s   Normal    Killing                      pod/security-context-538c4410-da88-406b-b9b5-118ad001f228   Stopping container write-pod
volumemode-8073      3m27s   Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-g2gl5   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-8073      3m27s   Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-g2gl5   Created container agnhost
volumemode-8073      3m27s   Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-g2gl5   Started container agnhost
volumemode-8073      2m19s   Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-g2gl5   Stopping container agnhost
volumemode-8073      2m59s   Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-5wn8-rpbnj   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-8073      2m58s   Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-rpbnj   Created container agnhost
volumemode-8073      2m56s   Normal    Started
pod/hostexec-bootstrap-e2e-minion-group-5wn8-rpbnj                          Started container agnhost\nvolumemode-8073                      2m45s       Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-5wn8-rpbnj                          Stopping container agnhost\nvolumemode-8073                      3m19s       Warning   ProvisioningFailed           persistentvolumeclaim/pvc-ps7mc                                             storageclass.storage.k8s.io \"volumemode-8073\" not found\nvolumemode-8073                      3m8s        Normal    Scheduled                    pod/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82                   Successfully assigned volumemode-8073/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82 to bootstrap-e2e-minion-group-5wn8\nvolumemode-8073                      3m7s        Normal    Pulled                       pod/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82                   Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-8073                      3m7s        Normal    Created                      pod/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82                   Created container write-pod\nvolumemode-8073                      3m6s        Normal    Started                      pod/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82                   Started container write-pod\nvolumemode-8073                      2m45s       Normal    Killing                      pod/security-context-13b18133-cdcc-4c33-a3d4-bbed8868ab82                   Stopping container write-pod\nwebhook-4950                         2m14s       Normal    Scheduled                    pod/sample-webhook-deployment-5f65f8c764-wbvz9                              Successfully assigned webhook-4950/sample-webhook-deployment-5f65f8c764-wbvz9 to bootstrap-e2e-minion-group-7htw\nwebhook-4950                         2m10s       Normal    Pulled                       
pod/sample-webhook-deployment-5f65f8c764-wbvz9                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-4950                         2m10s       Normal    Created                      pod/sample-webhook-deployment-5f65f8c764-wbvz9                              Created container sample-webhook\nwebhook-4950                         2m9s        Normal    Started                      pod/sample-webhook-deployment-5f65f8c764-wbvz9                              Started container sample-webhook\nwebhook-4950                         2m6s        Warning   Unhealthy                    pod/sample-webhook-deployment-5f65f8c764-wbvz9                              Readiness probe failed: Get https://10.64.1.18:8444/readyz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nwebhook-4950                         2m15s       Normal    SuccessfulCreate             replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-wbvz9\nwebhook-4950                         2m15s       Normal    ScalingReplicaSet            deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-803                          17s         Normal    Scheduled                    pod/sample-webhook-deployment-5f65f8c764-96jbr                              Successfully assigned webhook-803/sample-webhook-deployment-5f65f8c764-96jbr to bootstrap-e2e-minion-group-dwjn\nwebhook-803                          14s         Normal    Pulled                       pod/sample-webhook-deployment-5f65f8c764-96jbr                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-803                          14s         Normal    Created                      
pod/sample-webhook-deployment-5f65f8c764-96jbr                              Created container sample-webhook\nwebhook-803                          14s         Normal    Started                      pod/sample-webhook-deployment-5f65f8c764-96jbr                              Started container sample-webhook\nwebhook-803                          17s         Normal    SuccessfulCreate             replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-96jbr\nwebhook-803                          17s         Normal    ScalingReplicaSet            deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-8581                         3m44s       Normal    Scheduled                    pod/sample-webhook-deployment-5f65f8c764-l2p2m                              Successfully assigned webhook-8581/sample-webhook-deployment-5f65f8c764-l2p2m to bootstrap-e2e-minion-group-1s6w\nwebhook-8581                         3m39s       Normal    Pulled                       pod/sample-webhook-deployment-5f65f8c764-l2p2m                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-8581                         3m39s       Normal    Created                      pod/sample-webhook-deployment-5f65f8c764-l2p2m                              Created container sample-webhook\nwebhook-8581                         3m37s       Normal    Started                      pod/sample-webhook-deployment-5f65f8c764-l2p2m                              Started container sample-webhook\nwebhook-8581                         3m44s       Normal    SuccessfulCreate             replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-l2p2m\nwebhook-8581                         3m44s       Normal    ScalingReplicaSet            
deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-8704                         2m15s       Normal    Scheduled                    pod/sample-webhook-deployment-5f65f8c764-v6zmq                              Successfully assigned webhook-8704/sample-webhook-deployment-5f65f8c764-v6zmq to bootstrap-e2e-minion-group-7htw\nwebhook-8704                         2m10s       Normal    Pulled                       pod/sample-webhook-deployment-5f65f8c764-v6zmq                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-8704                         2m10s       Normal    Created                      pod/sample-webhook-deployment-5f65f8c764-v6zmq                              Created container sample-webhook\nwebhook-8704                         2m9s        Normal    Started                      pod/sample-webhook-deployment-5f65f8c764-v6zmq                              Started container sample-webhook\nwebhook-8704                         2m15s       Normal    SuccessfulCreate             replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-v6zmq\nwebhook-8704                         2m16s       Normal    ScalingReplicaSet            deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\n"
Jan 16 03:26:37.143: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config get persistentvolumes --all-namespaces'
Jan 16 03:26:37.606: INFO: stderr: ""
Jan 16 03:26:37.606: INFO: stdout: "NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                          STORAGECLASS                                                        REASON   AGE\nlocal-2wdqv                                2Gi        RWO            Retain           Available                                                    provisioning-639                                                             6s\nlocal-88ngz                                2Gi        RWO            Retain           Bound         provisioning-9197/pvc-66t9j                    provisioning-9197                                                            27s\nlocal-9n662                                2Gi        RWO            Retain           Bound         provisioning-8159/pvc-b42rh                    provisioning-8159                                                            19s\nlocal-nxqns                                2Gi        RWO            Retain           Bound         provisioning-2443/pvc-r42bj                    provisioning-2443                                                            30s\nlocal-pv5jgb8                              2Gi        RWO            Retain           Released      persistent-local-volumes-test-6257/pvc-hc9vj   local-volume-test-storageclass-persistent-local-volumes-test-6257            16m\nlocal-pv7l2lp                              2Gi        RWO            Retain           Released      persistent-local-volumes-test-9941/pvc-4hlxc   local-volume-test-storageclass-persistent-local-volumes-test-9941            10m\nlocal-pvqscpj                              2Gi        RWO            Retain           Bound         persistent-local-volumes-test-4308/pvc-d27db   local-volume-test-storageclass-persistent-local-volumes-test-4308            23s\nlocal-sq4d2                                2Gi        RWO            Retain           Bound         volume-9478/pvc-kpgvl            
              volume-9478                                                                  12s\nlocal-wgwzh                                2Gi        RWO            Retain           Available                                                    provisioning-1620                                                            2s\npv1namepmzdzdx7jj                          3M         RWO            Retain           Available                                                                                                                                 1s\npvc-344de9ed-b6d6-4b45-aa20-49af4de68ad9   1Gi        RWO            Delete           Bound         pvc-protection-1491/pvc-protectionhqwcz        standard                                                                     21s\npvc-6dfba1bc-e6aa-4292-a568-d32047444e67   5Gi        RWO            Delete           Released      volume-945/gcepdx9qs4                          volume-945-gcepd-sc76dnc                                                     24s\npvc-73053dc2-fe36-4b60-999d-661d92ca6aed   5Gi        RWO            Delete           Bound         volume-expand-5571/gcepd4d2vw                  volume-expand-5571-gcepd-scz25z6                                             25s\npvc-88ffbfe2-5f36-4c0a-88a1-f80c0dc9af2a   1Gi        RWO            Delete           Terminating   csi-mock-volumes-5821/pvc-jlbm7                csi-mock-volumes-5821-sc                                                     8m16s\npvc-b18ba60c-f31c-4e26-a981-fa80026f484e   1Gi        RWO            Delete           Terminating   csi-mock-volumes-41/pvc-g7nqt                  csi-mock-volumes-41-sc                                                       4m7s\n"
Jan 16 03:26:38.154: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config get endpoints --all-namespaces'
Jan 16 03:26:38.986: INFO: stderr: ""
Jan 16 03:26:38.986: INFO: stdout: "NAMESPACE           NAME                          ENDPOINTS                                            AGE\ndefault             kubernetes                    34.83.18.233:443                                     20m\nephemeral-7260      csi-hostpath-attacher         <none>                                               8s\nephemeral-7260      csi-hostpath-provisioner      <none>                                               6s\nephemeral-7260      csi-hostpath-resizer          <none>                                               6s\nephemeral-7260      csi-hostpathplugin            <none>                                               7s\nephemeral-7260      csi-snapshotter               <none>                                               6s\nkube-system         default-http-backend          10.64.2.2:8080                                       19m\nkube-system         kube-controller-manager       <none>                                               20m\nkube-system         kube-dns                      10.64.1.3:53,10.64.2.3:53,10.64.1.3:53 + 3 more...   
19m\nkube-system         kube-scheduler                <none>                                               20m\nkube-system         kubernetes-dashboard          10.64.2.4:8443                                       19m\nkube-system         metrics-server                10.64.2.5:443                                        19m\nkubectl-426         agnhost-master                <none>                                               47s\nkubectl-9342        ep1namepmzdzdx7jj             192.168.3.1:8000                                     0s\nprovisioning-8958   gluster-server                10.64.1.61:24007                                     2s\nservices-1811       service-headless              10.64.2.235:9376,10.64.3.220:9376,10.64.4.211:9376   4m1s\nservices-1811       service-headless-toggled      10.64.1.249:9376,10.64.2.239:9376,10.64.3.227:9376   3m39s\nservices-6510       hairpin-test                  10.64.4.230:8080                                     3m8s\nservices-7709       up-down-2                     10.64.1.232:9376,10.64.2.211:9376,10.64.4.195:9376   5m18s\nservices-7709       up-down-3                     10.64.1.4:9376,10.64.3.231:9376,10.64.4.223:9376     3m22s\nstatefulset-289     test                          <none>                                               4m55s\nstatefulset-4270    test                          <none>                                               9m50s\nstatefulset-6251    test                          <none>                                               106s\nstatefulset-7464    test                          10.64.1.58:80,10.64.3.254:80,10.64.4.248:80          2m22s\nstatefulset-8127    test                          10.64.2.23:80                                        99s\nsvc-latency-7096    latency-svc-28dm6             10.64.1.20:80                                        83s\nsvc-latency-7096    latency-svc-29x6s             10.64.1.20:80                                        109s\nsvc-latency-7096    
latency-svc-2rsnk             10.64.1.20:80                                        117s\nsvc-latency-7096    latency-svc-2s9sj             10.64.1.20:80                                        64s\nsvc-latency-7096    latency-svc-2xgnv             10.64.1.20:80                                        96s\nsvc-latency-7096    latency-svc-427r2             10.64.1.20:80                                        113s\nsvc-latency-7096    latency-svc-45wwc             10.64.1.20:80                                        104s\nsvc-latency-7096    latency-svc-46ff2             10.64.1.20:80                                        64s\nsvc-latency-7096    latency-svc-47xgp             10.64.1.20:80                                        56s\nsvc-latency-7096    latency-svc-48dvw             10.64.1.20:80                                        101s\nsvc-latency-7096    latency-svc-4mrhj             10.64.1.20:80                                        118s\nsvc-latency-7096    latency-svc-4nmq5             10.64.1.20:80                                        88s\nsvc-latency-7096    latency-svc-4pbl4             10.64.1.20:80                                        81s\nsvc-latency-7096    latency-svc-4pzqf             10.64.1.20:80                                        60s\nsvc-latency-7096    latency-svc-4t7rh             10.64.1.20:80                                        109s\nsvc-latency-7096    latency-svc-52vhl             10.64.1.20:80                                        116s\nsvc-latency-7096    latency-svc-55rgz             10.64.1.20:80                                        103s\nsvc-latency-7096    latency-svc-58f8c             10.64.1.20:80                                        69s\nsvc-latency-7096    latency-svc-58k29             10.64.1.20:80                                        82s\nsvc-latency-7096    latency-svc-5b2br             10.64.1.20:80                                        81s\nsvc-latency-7096    latency-svc-5b5nz             10.64.1.20:80     
                                   90s\nsvc-latency-7096    latency-svc-5bqnt             10.64.1.20:80                                        96s\nsvc-latency-7096    latency-svc-5fj6s             10.64.1.20:80                                        111s\nsvc-latency-7096    latency-svc-5v62z             10.64.1.20:80                                        53s\nsvc-latency-7096    latency-svc-5wcph             10.64.1.20:80                                        75s\nsvc-latency-7096    latency-svc-65jhx             10.64.1.20:80                                        65s\nsvc-latency-7096    latency-svc-69fgc             10.64.1.20:80                                        104s\nsvc-latency-7096    latency-svc-6b6zf             10.64.1.20:80                                        61s\nsvc-latency-7096    latency-svc-6f52z             10.64.1.20:80                                        94s\nsvc-latency-7096    latency-svc-6lc7z             10.64.1.20:80                                        57s\nsvc-latency-7096    latency-svc-6lw2p             10.64.1.20:80                                        79s\nsvc-latency-7096    latency-svc-6mcbf             10.64.1.20:80                                        51s\nsvc-latency-7096    latency-svc-6mdqx             10.64.1.20:80                                        57s\nsvc-latency-7096    latency-svc-6v4bt             10.64.1.20:80                                        54s\nsvc-latency-7096    latency-svc-79wd6             10.64.1.20:80                                        99s\nsvc-latency-7096    latency-svc-7fwmm             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-7ktkc             10.64.1.20:80                                        111s\nsvc-latency-7096    latency-svc-7rj86             10.64.1.20:80                                        85s\nsvc-latency-7096    latency-svc-7tdgg             10.64.1.20:80                                        
62s\nsvc-latency-7096    latency-svc-7zzfc             10.64.1.20:80                                        104s\nsvc-latency-7096    latency-svc-84fx6             10.64.1.20:80                                        73s\nsvc-latency-7096    latency-svc-86jzh             10.64.1.20:80                                        89s\nsvc-latency-7096    latency-svc-8hmqw             10.64.1.20:80                                        98s\nsvc-latency-7096    latency-svc-8j97r             10.64.1.20:80                                        54s\nsvc-latency-7096    latency-svc-8vx4g             10.64.1.20:80                                        58s\nsvc-latency-7096    latency-svc-8wz2q             10.64.1.20:80                                        99s\nsvc-latency-7096    latency-svc-8x2pz             10.64.1.20:80                                        61s\nsvc-latency-7096    latency-svc-92rjk             10.64.1.20:80                                        89s\nsvc-latency-7096    latency-svc-97wbq             10.64.1.20:80                                        72s\nsvc-latency-7096    latency-svc-98gf8             10.64.1.20:80                                        53s\nsvc-latency-7096    latency-svc-9crzc             10.64.1.20:80                                        70s\nsvc-latency-7096    latency-svc-9fnc2             10.64.1.20:80                                        113s\nsvc-latency-7096    latency-svc-9kctz             10.64.1.20:80                                        79s\nsvc-latency-7096    latency-svc-9qs26             10.64.1.20:80                                        80s\nsvc-latency-7096    latency-svc-9s5pl             10.64.1.20:80                                        65s\nsvc-latency-7096    latency-svc-9xtk5             10.64.1.20:80                                        81s\nsvc-latency-7096    latency-svc-b2djr             10.64.1.20:80                                        93s\nsvc-latency-7096    latency-svc-bf22n            
 10.64.1.20:80                                        91s\nsvc-latency-7096    latency-svc-bf2gf             10.64.1.20:80                                        63s\nsvc-latency-7096    latency-svc-bk2jt             10.64.1.20:80                                        97s\nsvc-latency-7096    latency-svc-bkkgb             10.64.1.20:80                                        73s\nsvc-latency-7096    latency-svc-bpcqd             10.64.1.20:80                                        93s\nsvc-latency-7096    latency-svc-cbbmv             10.64.1.20:80                                        106s\nsvc-latency-7096    latency-svc-cfh8q             10.64.1.20:80                                        55s\nsvc-latency-7096    latency-svc-cmffn             10.64.1.20:80                                        92s\nsvc-latency-7096    latency-svc-cts78             10.64.1.20:80                                        60s\nsvc-latency-7096    latency-svc-cv85t             10.64.1.20:80                                        115s\nsvc-latency-7096    latency-svc-cxjf5             10.64.1.20:80                                        94s\nsvc-latency-7096    latency-svc-czpdj             10.64.1.20:80                                        105s\nsvc-latency-7096    latency-svc-d6bfb             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-d9cck             10.64.1.20:80                                        106s\nsvc-latency-7096    latency-svc-dd7vh             10.64.1.20:80                                        54s\nsvc-latency-7096    latency-svc-dfdch             10.64.1.20:80                                        78s\nsvc-latency-7096    latency-svc-dnw6z             10.64.1.20:80                                        110s\nsvc-latency-7096    latency-svc-dprw9             10.64.1.20:80                                        75s\nsvc-latency-7096    latency-svc-dwxs5             10.64.1.20:80                                     
   61s\nsvc-latency-7096    latency-svc-f5hhl             10.64.1.20:80                                        115s\nsvc-latency-7096    latency-svc-f5l6p             10.64.1.20:80                                        52s\nsvc-latency-7096    latency-svc-f7p5r             10.64.1.20:80                                        73s\nsvc-latency-7096    latency-svc-f8r9j             10.64.1.20:80                                        80s\nsvc-latency-7096    latency-svc-fds7h             10.64.1.20:80                                        77s\nsvc-latency-7096    latency-svc-fqb7s             10.64.1.20:80                                        118s\nsvc-latency-7096    latency-svc-ftcrw             10.64.1.20:80                                        108s\nsvc-latency-7096    latency-svc-fzxlj             10.64.1.20:80                                        99s\nsvc-latency-7096    latency-svc-g9f57             10.64.1.20:80                                        97s\nsvc-latency-7096    latency-svc-gjbhg             10.64.1.20:80                                        69s\nsvc-latency-7096    latency-svc-gpt69             10.64.1.20:80                                        117s\nsvc-latency-7096    latency-svc-gsfnz             10.64.1.20:80                                        117s\nsvc-latency-7096    latency-svc-gwjlz             10.64.1.20:80                                        116s\nsvc-latency-7096    latency-svc-h72mm             10.64.1.20:80                                        58s\nsvc-latency-7096    latency-svc-h7bf8             10.64.1.20:80                                        82s\nsvc-latency-7096    latency-svc-hgbrn             10.64.1.20:80                                        84s\nsvc-latency-7096    latency-svc-hm5cb             10.64.1.20:80                                        60s\nsvc-latency-7096    latency-svc-hr922             10.64.1.20:80                                        93s\nsvc-latency-7096    latency-svc-hs6hl     
        10.64.1.20:80                                        52s\nsvc-latency-7096    latency-svc-j2vnq             10.64.1.20:80                                        110s\nsvc-latency-7096    latency-svc-j8flr             10.64.1.20:80                                        67s\nsvc-latency-7096    latency-svc-jc452             10.64.1.20:80                                        100s\nsvc-latency-7096    latency-svc-jf6ds             10.64.1.20:80                                        55s\nsvc-latency-7096    latency-svc-jq5st             10.64.1.20:80                                        102s\nsvc-latency-7096    latency-svc-jz685             10.64.1.20:80                                        55s\nsvc-latency-7096    latency-svc-k25hf             10.64.1.20:80                                        93s\nsvc-latency-7096    latency-svc-k59nr             10.64.1.20:80                                        84s\nsvc-latency-7096    latency-svc-k7pjq             10.64.1.20:80                                        74s\nsvc-latency-7096    latency-svc-k8qdj             10.64.1.20:80                                        119s\nsvc-latency-7096    latency-svc-kcf5g             10.64.1.20:80                                        83s\nsvc-latency-7096    latency-svc-kqdjg             10.64.1.20:80                                        85s\nsvc-latency-7096    latency-svc-kwv99             10.64.1.20:80                                        71s\nsvc-latency-7096    latency-svc-kzp62             10.64.1.20:80                                        64s\nsvc-latency-7096    latency-svc-l6vhj             10.64.1.20:80                                        60s\nsvc-latency-7096    latency-svc-lb26f             10.64.1.20:80                                        99s\nsvc-latency-7096    latency-svc-lbdt8             10.64.1.20:80                                        87s\nsvc-latency-7096    latency-svc-lsx5f             10.64.1.20:80                               
         88s\nsvc-latency-7096    latency-svc-lvbrp             10.64.1.20:80                                        98s\nsvc-latency-7096    latency-svc-m4xx8             10.64.1.20:80                                        102s\nsvc-latency-7096    latency-svc-mbqfq             10.64.1.20:80                                        116s\nsvc-latency-7096    latency-svc-mg78r             10.64.1.20:80                                        70s\nsvc-latency-7096    latency-svc-mhdvd             10.64.1.20:80                                        110s\nsvc-latency-7096    latency-svc-mk55m             10.64.1.20:80                                        118s\nsvc-latency-7096    latency-svc-mkbbl             10.64.1.20:80                                        86s\nsvc-latency-7096    latency-svc-mwqk7             10.64.1.20:80                                        98s\nsvc-latency-7096    latency-svc-mx45n             10.64.1.20:80                                        71s\nsvc-latency-7096    latency-svc-n2ssn             10.64.1.20:80                                        108s\nsvc-latency-7096    latency-svc-n4dsm             10.64.1.20:80                                        76s\nsvc-latency-7096    latency-svc-nd99w             10.64.1.20:80                                        61s\nsvc-latency-7096    latency-svc-ndq7m             10.64.1.20:80                                        53s\nsvc-latency-7096    latency-svc-nfxd9             10.64.1.20:80                                        119s\nsvc-latency-7096    latency-svc-nm4rv             10.64.1.20:80                                        104s\nsvc-latency-7096    latency-svc-nmxv7             10.64.1.20:80                                        96s\nsvc-latency-7096    latency-svc-nnkds             10.64.1.20:80                                        111s\nsvc-latency-7096    latency-svc-nrfcz             10.64.1.20:80                                        51s\nsvc-latency-7096    
latency-svc-pb2td             10.64.1.20:80                                        63s\nsvc-latency-7096    latency-svc-pbwpj             10.64.1.20:80                                        95s\nsvc-latency-7096    latency-svc-phv55             10.64.1.20:80                                        116s\nsvc-latency-7096    latency-svc-pm8st             10.64.1.20:80                                        106s\nsvc-latency-7096    latency-svc-pprgb             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-pstxx             10.64.1.20:80                                        71s\nsvc-latency-7096    latency-svc-pxjvb             10.64.1.20:80                                        100s\nsvc-latency-7096    latency-svc-pzvbs             10.64.1.20:80                                        89s\nsvc-latency-7096    latency-svc-q28jx             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-q2rn7             10.64.1.20:80                                        118s\nsvc-latency-7096    latency-svc-q2zrm             10.64.1.20:80                                        87s\nsvc-latency-7096    latency-svc-q89hv             10.64.1.20:80                                        63s\nsvc-latency-7096    latency-svc-qcnxk             10.64.1.20:80                                        107s\nsvc-latency-7096    latency-svc-qcv7d             10.64.1.20:80                                        92s\nsvc-latency-7096    latency-svc-qdwkm             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-qhhqk             10.64.1.20:80                                        97s\nsvc-latency-7096    latency-svc-qjrsh             10.64.1.20:80                                        68s\nsvc-latency-7096    latency-svc-qmcd2             10.64.1.20:80                                        73s\nsvc-latency-7096    latency-svc-qrhxh             10.64.1.20:80        
                                95s\nsvc-latency-7096    latency-svc-qsf9f             10.64.1.20:80                                        54s\nsvc-latency-7096    latency-svc-qstvp             10.64.1.20:80                                        62s\nsvc-latency-7096    latency-svc-rgfjv             10.64.1.20:80                                        80s\nsvc-latency-7096    latency-svc-rlvvm             10.64.1.20:80                                        106s\nsvc-latency-7096    latency-svc-rscsv             10.64.1.20:80                                        85s\nsvc-latency-7096    latency-svc-rtb4r             10.64.1.20:80                                        91s\nsvc-latency-7096    latency-svc-s42j2             10.64.1.20:80                                        107s\nsvc-latency-7096    latency-svc-s9974             10.64.1.20:80                                        80s\nsvc-latency-7096    latency-svc-s9cgv             10.64.1.20:80                                        114s\nsvc-latency-7096    latency-svc-scnjl             10.64.1.20:80                                        61s\nsvc-latency-7096    latency-svc-sctzm             10.64.1.20:80                                        52s\nsvc-latency-7096    latency-svc-sgr55             10.64.1.20:80                                        110s\nsvc-latency-7096    latency-svc-sgws8             10.64.1.20:80                                        90s\nsvc-latency-7096    latency-svc-smdbr             10.64.1.20:80                                        72s\nsvc-latency-7096    latency-svc-smkkx             10.64.1.20:80                                        66s\nsvc-latency-7096    latency-svc-svmmq             10.64.1.20:80                                        66s\nsvc-latency-7096    latency-svc-t2pcn             10.64.1.20:80                                        67s\nsvc-latency-7096    latency-svc-t6gbn             10.64.1.20:80                                        
90s\nsvc-latency-7096    latency-svc-tbblg             10.64.1.20:80                                        54s\nsvc-latency-7096    latency-svc-tlgn9             10.64.1.20:80                                        118s\nsvc-latency-7096    latency-svc-tv8ht             10.64.1.20:80                                        52s\nsvc-latency-7096    latency-svc-v5n4j             10.64.1.20:80                                        73s\nsvc-latency-7096    latency-svc-v5x4l             10.64.1.20:80                                        61s\nsvc-latency-7096    latency-svc-v9j8w             10.64.1.20:80                                        70s\nsvc-latency-7096    latency-svc-vfjlj             10.64.1.20:80                                        101s\nsvc-latency-7096    latency-svc-vhlfn             10.64.1.20:80                                        53s\nsvc-latency-7096    latency-svc-vsrmk             10.64.1.20:80                                        62s\nsvc-latency-7096    latency-svc-vzzxs             10.64.1.20:80                                        77s\nsvc-latency-7096    latency-svc-w2kc6             10.64.1.20:80                                        111s\nsvc-latency-7096    latency-svc-w2rlq             10.64.1.20:80                                        112s\nsvc-latency-7096    latency-svc-w5l9j             10.64.1.20:80                                        78s\nsvc-latency-7096    latency-svc-wbhhs             10.64.1.20:80                                        68s\nsvc-latency-7096    latency-svc-wctdb             10.64.1.20:80                                        109s\nsvc-latency-7096    latency-svc-wjmdh             10.64.1.20:80                                        55s\nsvc-latency-7096    latency-svc-wjtr2             10.64.1.20:80                                        64s\nsvc-latency-7096    latency-svc-wvcqw             10.64.1.20:80                                        76s\nsvc-latency-7096    latency-svc-wxlbh         
    10.64.1.20:80                                        74s\nsvc-latency-7096    latency-svc-wz8s9             10.64.1.20:80                                        68s\nsvc-latency-7096    latency-svc-xkrnw             10.64.1.20:80                                        89s\nsvc-latency-7096    latency-svc-xnjff             10.64.1.20:80                                        87s\nsvc-latency-7096    latency-svc-xplcx             10.64.1.20:80                                        59s\nsvc-latency-7096    latency-svc-xshv6             10.64.1.20:80                                        110s\nsvc-latency-7096    latency-svc-xsqj4             10.64.1.20:80                                        105s\nsvc-latency-7096    latency-svc-xwjmk             10.64.1.20:80                                        62s\nsvc-latency-7096    latency-svc-xz7sx             10.64.1.20:80                                        84s\nsvc-latency-7096    latency-svc-xzmfg             10.64.1.20:80                                        51s\nsvc-latency-7096    latency-svc-z66hw             10.64.1.20:80                                        72s\nsvc-latency-7096    latency-svc-zbjbj             10.64.1.20:80                                        105s\nsvc-latency-7096    latency-svc-zhgt9             10.64.1.20:80                                        99s\nsvc-latency-7096    latency-svc-zhwnh             10.64.1.20:80                                        103s\nsvc-latency-7096    latency-svc-ztwnl             10.64.1.20:80                                        65s\nvolume-1118         example.com-nfs-volume-1118   <none>                                               4m26s\nwebhook-803         e2e-test-webhook              10.64.2.20:8444                                      6s\n"
... skipping 26 lines ...
Jan 16 03:26:55.397: INFO: stdout:
NAMESPACE          NAME                         READY   AGE
ephemeral-7260     csi-hostpath-attacher        0/1     25s
ephemeral-7260     csi-hostpath-provisioner     1/1     23s
ephemeral-7260     csi-hostpath-resizer         0/1     23s
ephemeral-7260     csi-hostpathplugin           1/1     24s
ephemeral-7260     csi-snapshotter              1/1     23s
kube-system        volume-snapshot-controller   1/1     20m
kubectl-9342       ss3pmzdzdx7jj                0/1     1s
statefulset-7464   ss2                          2/3     2m39s
statefulset-8127   ss2                          1/3     116s
volumemode-9881    csi-hostpath-attacher        0/1     13s
volumemode-9881    csi-hostpath-provisioner     0/1     11s
volumemode-9881    csi-hostpath-resizer         0/1     10s
volumemode-9881    csi-hostpathplugin           0/1     12s
volumemode-9881    csi-snapshotter              0/1     10s
Jan 16 03:26:56.149: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config get deployments --all-namespaces'
Jan 16 03:26:56.848: INFO: stderr: ""
Jan 16 03:26:56.848: INFO: stdout:
NAMESPACE         NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment-6952   test-rollover-deployment    1/1     1            1           5m12s
deployment-8124   test-cleanup-deployment     1/1     1            1           4m11s
kube-system       coredns                     2/2     2            2           20m
kube-system       event-exporter-v0.3.1       1/1     1            1           20m
kube-system       fluentd-gcp-scaler          1/1     1            1           20m
kube-system       kube-dns-autoscaler         1/1     1            1           20m
kube-system       kubernetes-dashboard        1/1     1            1           20m
kube-system       l7-default-backend          1/1     1            1           20m
kube-system       metrics-server-v0.3.6       1/1     1            1           20m
kubectl-9342      deployment4pmzdzdx7jj       0/1     0            0           1s
webhook-7597      sample-webhook-deployment   1/1     1            1           10s
Jan 16 03:26:57.702: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.18.233 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 16 03:27:03.955: INFO: stderr: ""
Jan 16 03:27:03.955: INFO: stdout:
NAMESPACE  LAST SEEN  TYPE  REASON  OBJECT  MESSAGE
configmap-1684  6m12s  Normal  Scheduled  pod/pod-configmaps-f8a47892-096c-4fde-9c58-a4da56195591  Successfully assigned configmap-1684/pod-configmaps-f8a47892-096c-4fde-9c58-a4da56195591 to bootstrap-e2e-minion-group-5wn8
configmap-1684  6m8s  Normal  Pulled  pod/pod-configmaps-f8a47892-096c-4fde-9c58-a4da56195591  Container image "docker.io/library/busybox:1.29" already present on machine
configmap-1684  6m8s  Normal  Created  pod/pod-configmaps-f8a47892-096c-4fde-9c58-a4da56195591  Created container env-test
configmap-1684  6m5s  Normal  Started  pod/pod-configmaps-f8a47892-096c-4fde-9c58-a4da56195591  Started container env-test
configmap-1884  47s  Normal  Scheduled  pod/pod-configmaps-be710e9c-a9bf-4c4f-aefe-76942e37cbdb  Successfully assigned configmap-1884/pod-configmaps-be710e9c-a9bf-4c4f-aefe-76942e37cbdb to bootstrap-e2e-minion-group-5wn8
configmap-1884  44s  Normal  Pulled  pod/pod-configmaps-be710e9c-a9bf-4c4f-aefe-76942e37cbdb  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-1884  44s  Normal  Created  pod/pod-configmaps-be710e9c-a9bf-4c4f-aefe-76942e37cbdb  Created container configmap-volume-test
configmap-1884  44s  Normal  Started  pod/pod-configmaps-be710e9c-a9bf-4c4f-aefe-76942e37cbdb  Started container configmap-volume-test
configmap-5287  16s  Normal  Scheduled  pod/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84  Successfully assigned configmap-5287/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84 to bootstrap-e2e-minion-group-dwjn
configmap-5287  15s  Warning  FailedMount  pod/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84  MountVolume.SetUp failed for volume "default-token-t5tk8" : failed to sync secret cache: timed out waiting for the condition
configmap-5287  11s  Normal  Pulled  pod/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-5287  11s  Normal  Created  pod/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84  Created container configmap-volume-test
configmap-5287  10s  Normal  Started  pod/pod-configmaps-3c88ec65-7eaa-4821-8af0-900ed69ede84  Started container configmap-volume-test
configmap-6774  83s  Normal  Scheduled  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  Successfully assigned configmap-6774/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d to bootstrap-e2e-minion-group-5wn8
configmap-6774  82s  Warning  FailedMount  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  MountVolume.SetUp failed for volume "default-token-xx7vt" : failed to sync secret cache: timed out waiting for the condition
configmap-6774  82s  Warning  FailedMount  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  MountVolume.SetUp failed for volume "configmap-volume" : failed to sync configmap cache: timed out waiting for the condition
configmap-6774  80s  Normal  Pulled  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-6774  80s  Normal  Created  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  Created container configmap-volume-test
configmap-6774  80s  Normal  Started  pod/pod-configmaps-ab22f77c-8bb3-464a-a5a0-cfae997f921d  Started container configmap-volume-test
configmap-7784  4m19s  Normal  Scheduled  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Successfully assigned configmap-7784/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90 to bootstrap-e2e-minion-group-5wn8
configmap-7784  4m18s  Warning  FailedMount  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  MountVolume.SetUp failed for volume "default-token-tnmnp" : failed to sync secret cache: timed out waiting for the condition
configmap-7784  4m18s  Warning  FailedMount  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  MountVolume.SetUp failed for volume "configmap-volume" : failed to sync configmap cache: timed out waiting for the condition
configmap-7784  4m16s  Normal  Pulled  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-7784  4m16s  Normal  Created  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Created container configmap-volume-data-test
configmap-7784  4m16s  Normal  Started  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Started container configmap-volume-data-test
configmap-7784  4m16s  Normal  Pulled  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Container image "docker.io/library/busybox:1.29" already present on machine
configmap-7784  4m16s  Normal  Created  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Created container configmap-volume-binary-test
configmap-7784  4m15s  Normal  Started  pod/pod-configmaps-bd85fea7-d547-47c6-929e-341e5ed8cd90  Started container configmap-volume-binary-test
container-lifecycle-hook-1822  4m49s  Normal  Scheduled  pod/pod-handle-http-request  Successfully assigned container-lifecycle-hook-1822/pod-handle-http-request to bootstrap-e2e-minion-group-dwjn
container-lifecycle-hook-1822  4m48s  Warning  FailedMount  pod/pod-handle-http-request  MountVolume.SetUp failed for volume "default-token-tbbjx" : failed to sync secret cache: timed out waiting for the condition
container-lifecycle-hook-1822  4m46s  Normal  Pulled  pod/pod-handle-http-request  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
container-lifecycle-hook-1822  4m46s  Normal  Created  pod/pod-handle-http-request  Created container pod-handle-http-request
container-lifecycle-hook-1822  4m46s  Normal  Started  pod/pod-handle-http-request  Started container pod-handle-http-request
container-lifecycle-hook-1822  4m43s  Normal  Scheduled  pod/pod-with-poststart-http-hook  Successfully assigned container-lifecycle-hook-1822/pod-with-poststart-http-hook to bootstrap-e2e-minion-group-5wn8
container-lifecycle-hook-1822  4m41s  Normal  Pulled  pod/pod-with-poststart-http-hook  Container image "k8s.gcr.io/pause:3.1" already present on machine
container-lifecycle-hook-1822  4m41s  Normal  Created  pod/pod-with-poststart-http-hook  Created container pod-with-poststart-http-hook
container-lifecycle-hook-1822  4m41s  Normal  Started  pod/pod-with-poststart-http-hook  Started container pod-with-poststart-http-hook
container-lifecycle-hook-1822  4m37s  Normal  Killing  pod/pod-with-poststart-http-hook  Stopping container pod-with-poststart-http-hook
container-probe-3746  6m58s  Normal  Scheduled  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Successfully assigned container-probe-3746/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58 to bootstrap-e2e-minion-group-1s6w
container-probe-3746  6m  Normal  Pulled  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Container image "docker.io/library/busybox:1.29" already present on machine
container-probe-3746  6m  Normal  Created  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Created container busybox
container-probe-3746  5m59s  Normal  Started  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Started container busybox
container-probe-3746  6m33s  Warning  Unhealthy  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
container-probe-3746  6m32s  Normal  Killing  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Container busybox failed liveness probe, will be restarted
container-probe-3746  5m56s  Normal  Killing  pod/busybox-5d68fabd-7b69-4860-9ca1-2860ea295d58  Stopping container busybox
container-probe-4675  5m23s  Normal  Scheduled  pod/liveness-a4e9a17d-8a89-4358-a52a-59447297c996  Successfully assigned container-probe-4675/liveness-a4e9a17d-8a89-4358-a52a-59447297c996 to bootstrap-e2e-minion-group-5wn8
container-probe-4675  5m22s  Normal  Pulled  pod/liveness-a4e9a17d-8a89-4358-a52a-59447297c996  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
container-probe-4675  5m21s  Normal  Created  pod/liveness-a4e9a17d-8a89-4358-a52a-59447297c996  Created container liveness
container-probe-4675  5m21s  Normal  Started  pod/liveness-a4e9a17d-8a89-4358-a52a-59447297c996  Started container liveness
container-probe-4675  88s  Warning  ProbeWarning  pod/liveness-a4e9a17d-8a89-4358-a52a-59447297c996  Liveness probe warning: <a href="http://0.0.0.0/">Found</a>.
container-probe-7651  54s  Normal  Scheduled  pod/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903  Successfully assigned container-probe-7651/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903 to bootstrap-e2e-minion-group-7htw
container-probe-7651  50s  Normal  Pulled  pod/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903  Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
container-probe-7651  49s  Normal  Created  pod/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903  Created container test-webserver
container-probe-7651  47s  Normal  Started  pod/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903  Started container test-webserver
container-probe-7651  6s  Warning  Unhealthy  pod/test-webserver-853d1d8f-857e-436f-9d9a-b15aaf2d5903  Readiness probe failed: Get http://10.64.1.56:81/: dial tcp 10.64.1.56:81: connect: connection refused
container-probe-8796  4m9s  Normal  Scheduled  pod/busybox-06af0222-ab37-4d06-88f3-1b94e0707138  Successfully assigned container-probe-8796/busybox-06af0222-ab37-4d06-88f3-1b94e0707138 to bootstrap-e2e-minion-group-1s6w
container-probe-8796  4m3s  Normal  Pulled  pod/busybox-06af0222-ab37-4d06-88f3-1b94e0707138  Container image "docker.io/library/busybox:1.29" already present on machine
container-probe-8796  4m3s  Normal  Created  pod/busybox-06af0222-ab37-4d06-88f3-1b94e0707138  Created container busybox
container-probe-8796  4m1s  Normal  Started  pod/busybox-06af0222-ab37-4d06-88f3-1b94e0707138  Started container busybox
container-probe-9513  3m4s  Normal  Scheduled  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Successfully assigned container-probe-9513/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6 to bootstrap-e2e-minion-group-7htw
container-probe-9513  73s  Normal  Pulled  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
container-probe-9513  72s  Normal  Created  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Created container liveness
container-probe-9513  117s  Normal  Started  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Started container liveness
container-probe-9513  101s  Warning  Unhealthy  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Liveness probe failed: HTTP probe failed with statuscode: 500
container-probe-9513  101s  Normal  Killing  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Container liveness failed liveness probe, will be restarted
container-probe-9513  87s  Warning  BackOff  pod/liveness-d952e7ed-9b65-41cb-b0b0-9d19facaeff6  Back-off restarting failed container
container-runtime-1074  93s  Normal  Scheduled  pod/termination-message-container2c4973e2-9278-4bc2-bf86-610c608e2687  Successfully assigned container-runtime-1074/termination-message-container2c4973e2-9278-4bc2-bf86-610c608e2687 to bootstrap-e2e-minion-group-7htw
container-runtime-1074  90s  Normal  Pulled  pod/termination-message-container2c4973e2-9278-4bc2-bf86-610c608e2687  Container image "docker.io/library/busybox:1.29" already present on machine
container-runtime-1074  90s  Normal  Created  pod/termination-message-container2c4973e2-9278-4bc2-bf86-610c608e2687  Created container termination-message-container
container-runtime-1074  89s  Normal  Started  pod/termination-message-container2c4973e2-9278-4bc2-bf86-610c608e2687  Started container termination-message-container
container-runtime-8053  5m43s  Normal  Scheduled  pod/termination-message-container532914f3-04b0-4f54-a46b-f532f010077f  Successfully assigned container-runtime-8053/termination-message-container532914f3-04b0-4f54-a46b-f532f010077f to bootstrap-e2e-minion-group-dwjn
containe