Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-16 23:26
Elapsed: 1h6m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/a84744d0-01a5-40d7-aa2a-3e80413c39fd/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 455 lines ...
Project: k8s-gce-soak-1-5
Network Project: k8s-gce-soak-1-5
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.
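The "Unknown zone" errors above are consistent with a hard-coded zone list that includes the then-new asia-northeast3 (Seoul) region while the project's Compute Engine API view did not yet recognize it; that reading is inferred from the log, not confirmed by it. A minimal sketch of a pre-flight guard (`filter_zones` is a hypothetical helper, not part of the cluster scripts):

```shell
# Hypothetical pre-flight guard: keep only candidate zones that appear in the
# newline-separated known-zone list passed as $1, e.g. the output of
# `gcloud compute zones list --format='value(name)'`, so no request is ever
# sent to a zone the project cannot see.
filter_zones() {
  local known="$1"
  shift
  for z in "$@"; do
    printf '%s\n' "$known" | grep -qx "$z" && echo "$z"
  done
  return 0   # zones filtered out are not an error
}
```

Usage would look like `filter_zones "$(gcloud compute zones list --format='value(name)')" asia-northeast3-a us-west1-b`, emitting only the zones that are safe to query.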

Deleting firewall rules remaining in network bootstrap-e2e: 
W0116 23:50:54.946222  106662 loader.go:223] Config not found: /workspace/.kube/config
... skipping 146 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.203.169.247; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "k8s-gce-soak-1-5_bootstrap-e2e" set.
User "k8s-gce-soak-1-5_bootstrap-e2e" set.
Context "k8s-gce-soak-1-5_bootstrap-e2e" created.
Switched to context "k8s-gce-soak-1-5_bootstrap-e2e".
... skipping 27 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10s   v1.18.0-alpha.1.836+6413f1ee2be99f
bootstrap-e2e-minion-group-6tqd   Ready                      <none>   21s   v1.18.0-alpha.1.836+6413f1ee2be99f
bootstrap-e2e-minion-group-d58v   Ready                      <none>   23s   v1.18.0-alpha.1.836+6413f1ee2be99f
bootstrap-e2e-minion-group-w9fq   Ready                      <none>   22s   v1.18.0-alpha.1.836+6413f1ee2be99f
bootstrap-e2e-minion-group-zzr9   Ready                      <none>   21s   v1.18.0-alpha.1.836+6413f1ee2be99f
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
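The validation table above has the shape of `kubectl get componentstatuses` output (an assumption; the command itself is not shown in the log). A minimal sketch of a health check over rows of that shape:

```shell
# Hypothetical check over "NAME STATUS MESSAGE ERROR" rows like the table
# above: succeed only if every component row (header skipped) is Healthy.
all_healthy() {
  awk 'NR > 1 && $2 != "Healthy" { exit 1 }'
}
```

Usage: `kubectl get componentstatuses | all_healthy || echo "validation failed"`.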
Done, listing cluster services:
... skipping 77 lines ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=47014 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 9 lines ...
Specify --start=47882 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=49518 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
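The scp failures above are for optional logs (fluentd, node-problem-detector, startupscript) that do not exist on these node images; the dump script passes each glob straight to scp, so every missing pattern becomes a nonzero exit. A hedged sketch of a more tolerant local variant (`dump_logs` is a hypothetical helper, not part of log-dump.sh):

```shell
# Hypothetical tolerant log dump: expand each glob and skip patterns with no
# matches, instead of letting one missing optional log fail the whole batch.
dump_logs() {
  local srcdir="$1" destdir="$2"
  shift 2
  mkdir -p "$destdir"
  for pat in "$@"; do
    for f in "$srcdir"/$pat; do        # unquoted on purpose: let the glob expand
      [ -e "$f" ] && cp "$f" "$destdir/"
    done
  done
  return 0   # a missing optional log is not an error
}
```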
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-6tqd bootstrap-e2e-minion-group-d58v bootstrap-e2e-minion-group-w9fq bootstrap-e2e-minion-group-zzr9
Failures for bootstrap-e2e-minion-group (if any):
2020/01/16 23:57:44 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m4.674278207s
2020/01/16 23:57:44 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-gce-soak-1-5
... skipping 935 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 205 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 11 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 193 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:09.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 184 lines ...
STEP: Destroying namespace "services-201" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 175 lines ...
STEP: Destroying namespace "services-6581" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:13.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5255" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    should apply a new configuration to an existing RC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:923
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:15.372: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 77 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is root
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:17.429: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
• [SLOW TEST:11.316 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:20.363: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:7.553 seconds]
[sig-node] RuntimeClass
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:39
  should reject a Pod requesting a RuntimeClass with an unconfigured handler
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:47
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:21.535: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:21.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 89 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:27.467: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:27.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 70 lines ...
• [SLOW TEST:15.530 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:27.605: INFO: Driver vsphere doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:27.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 73 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859
    should create a pod from an image when restart is Never  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:28.697: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:28.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 496 lines ...
• [SLOW TEST:26.186 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
Jan 16 23:58:35.328: INFO: Driver "nfs" does not support block volume mode - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 68 lines ...
• [SLOW TEST:8.524 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jan 16 23:58:36.023: INFO: Only supported for node OS distro [windows] (not gci)
... skipping 66 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:35.335: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-620
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-29cc44ce-ccb7-4bf3-9b31-cc88aedf771f
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:36.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-620" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:36.920: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 126 lines ...
• [SLOW TEST:30.831 seconds]
[sig-api-machinery] Servers with support for API chunking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":1,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:40.031: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 85 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:41.891: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:41.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec using resource/name
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:577
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:45.693: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:45.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 121 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:16.964 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:63
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update PodDisruptionBudget status","total":-1,"completed":1,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:47.041: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:47.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 119 lines ...
• [SLOW TEST:12.137 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:49.079: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 60 lines ...
Jan 16 23:58:20.748: INFO: Creating resource for dynamic PV
Jan 16 23:58:20.748: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-4376-gcepd-scrmd6r
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jan 16 23:58:21.620: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 23:58:21.814: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:23.985: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:26.699: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:28.566: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:30.255: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:32.480: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:33.989: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:35.955: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:38.098: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:40.291: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:42.307: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:43.975: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:46.055: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:48.088: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:50.218: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:52.494: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 23:58:52.829: INFO: Error updating pvc gcepdqcd2h: PersistentVolumeClaim "gcepdqcd2h" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 23:58:52.829: INFO: Deleting PersistentVolumeClaim "gcepdqcd2h"
STEP: Deleting sc
Jan 16 23:58:53.239: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 44 lines ...
• [SLOW TEST:11.555 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:87
Jan 16 23:58:54.479: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 55 lines ...
Jan 16 23:58:49.870: INFO: Found 1 / 1
Jan 16 23:58:49.870: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 16 23:58:50.025: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 16 23:58:50.025: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 16 23:58:50.025: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config describe pod agnhost-master-jrm9r --namespace=kubectl-5864'
Jan 16 23:58:50.722: INFO: stderr: ""
Jan 16 23:58:50.722: INFO: stdout: "Name:         agnhost-master-jrm9r\nNamespace:    kubectl-5864\nPriority:     0\nNode:         bootstrap-e2e-minion-group-6tqd/10.138.0.3\nStart Time:   Thu, 16 Jan 2020 23:58:39 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  kubernetes.io/psp: e2e-test-privileged-psp\nStatus:       Running\nIP:           10.64.3.17\nIPs:\n  IP:           10.64.3.17\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://f940f99e42c6f1255aed604fe9eccb02f6bbf3eaa74ebfa57fd73a70d3a9d189\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 16 Jan 2020 23:58:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x7m4w (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-x7m4w:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-x7m4w\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason       Age   From                                      Message\n  ----     ------       ----  ----                                      -------\n  Normal   Scheduled    11s   default-scheduler                         Successfully assigned kubectl-5864/agnhost-master-jrm9r to bootstrap-e2e-minion-group-6tqd\n  Warning  FailedMount  10s   kubelet, bootstrap-e2e-minion-group-6tqd  MountVolume.SetUp failed for volume \"default-token-x7m4w\" : failed to sync secret cache: timed out waiting for the condition\n  Normal   Pulled       5s    kubelet, bootstrap-e2e-minion-group-6tqd  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal   Created      5s    kubelet, bootstrap-e2e-minion-group-6tqd  Created container agnhost-master\n  Normal   Started      4s    kubelet, bootstrap-e2e-minion-group-6tqd  Started container agnhost-master\n"
Jan 16 23:58:50.722: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config describe rc agnhost-master --namespace=kubectl-5864'
Jan 16 23:58:51.804: INFO: stderr: ""
Jan 16 23:58:51.804: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5864\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  12s   replication-controller  Created pod: agnhost-master-jrm9r\n"
Jan 16 23:58:51.804: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config describe service agnhost-master --namespace=kubectl-5864'
Jan 16 23:58:52.795: INFO: stderr: ""
Jan 16 23:58:52.795: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5864\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.0.227.209\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.3.17:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 16 23:58:52.988: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config describe node bootstrap-e2e-master'
Jan 16 23:58:54.345: INFO: stderr: ""
Jan 16 23:58:54.345: INFO: stdout: "Name:               bootstrap-e2e-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=bootstrap-e2e-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-west1\n                    topology.kubernetes.io/zone=us-west1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 16 Jan 2020 23:55:12 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  bootstrap-e2e-master\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 16 Jan 2020 23:58:53 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 16 Jan 2020 23:55:26 +0000   Thu, 16 Jan 2020 23:55:26 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 16 Jan 2020 23:55:43 +0000   Thu, 16 Jan 2020 23:55:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 16 Jan 2020 23:55:43 +0000   Thu, 16 Jan 2020 23:55:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 16 Jan 2020 23:55:43 +0000   Thu, 16 Jan 2020 23:55:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 16 Jan 2020 23:55:43 +0000   Thu, 16 Jan 2020 23:55:13 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.138.0.2\n  ExternalIP:   35.203.169.247\n  InternalDNS:  bootstrap-e2e-master.c.k8s-gce-soak-1-5.internal\n  Hostname:     bootstrap-e2e-master.c.k8s-gce-soak-1-5.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3785956Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3529956Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 2c171df663f8eadee6cb26812d287cc4\n  System UUID:                2c171df6-63f8-eade-e6cb-26812d287cc4\n  Boot ID:                    18f73bed-f1f9-47a1-aed9-c81ee589685d\n  Kernel Version:             4.19.76+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://19.3.1\n  Kubelet Version:            v1.18.0-alpha.1.836+6413f1ee2be99f\n  Kube-Proxy Version:         v1.18.0-alpha.1.836+6413f1ee2be99f\nPodCIDR:                      10.64.5.0/24\nPodCIDRs:                     10.64.5.0/24\nProviderID:                   gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-empty-dir-cleanup-bootstrap-e2e-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s\n  kube-system                 etcd-server-bootstrap-e2e-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         3m40s\n  kube-system                 etcd-server-events-bootstrap-e2e-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         3m40s\n  kube-system                 fluentd-gcp-v3.2.0-tz8kx                        100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    3m23s\n  kube-system                 kube-addon-manager-bootstrap-e2e-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         3m41s\n  kube-system                 kube-apiserver-bootstrap-e2e-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         3m41s\n  kube-system                 kube-controller-manager-bootstrap-e2e-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         3m40s\n  kube-system                 kube-scheduler-bootstrap-e2e-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         3m40s\n  kube-system                 l7-lb-controller-bootstrap-e2e-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         3m39s\n  kube-system                 metadata-proxy-v0.1-8hdgh                       32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      3m42s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        972m (97%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage          0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:\n  Type    Reason                   Age    From                           Message\n  ----    ------                   ----   ----                           -------\n  Normal  Starting                 3m42s  kubelet, bootstrap-e2e-master  Starting kubelet.\n  Normal  NodeHasSufficientMemory  3m42s  kubelet, bootstrap-e2e-master  Node bootstrap-e2e-master status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    3m42s  kubelet, bootstrap-e2e-master  Node bootstrap-e2e-master status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     3m42s  kubelet, bootstrap-e2e-master  Node bootstrap-e2e-master status is now: NodeHasSufficientPID\n  Normal  NodeNotSchedulable       3m42s  kubelet, bootstrap-e2e-master  Node bootstrap-e2e-master status is now: NodeNotSchedulable\n  Normal  NodeAllocatableEnforced  3m41s  kubelet, bootstrap-e2e-master  Updated Node Allocatable limit across pods\n  Normal  NodeReady                3m41s  kubelet, bootstrap-e2e-master  Node bootstrap-e2e-master status is now: NodeReady\n"
... skipping 83 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:56.276: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362

      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:55.352: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6116
... skipping 16 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:57.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6116" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:11.920 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:47
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:58:59.331: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 37 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:58:59.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7485" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 8 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 74 lines ...
• [SLOW TEST:32.968 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:455
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:00.592: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 39 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:53.488: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-311
... skipping 22 lines ...
• [SLOW TEST:7.150 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:7.127 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:01.621: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 194 lines ...
• [SLOW TEST:51.346 seconds]
[sig-network] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should check kube-proxy urls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:148
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:01.828: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:01.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 188 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:02.983: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision a volume and schedule a pod with AllowedTopologies
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jan 16 23:59:03.035: INFO: stderr: ""
Jan 16 23:59:03.035: INFO: stdout: "etcd-1 controller-manager scheduler etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of etcd-1
Jan 16 23:59:03.035: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-1'
Jan 16 23:59:03.699: INFO: stderr: ""
Jan 16 23:59:03.699: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of controller-manager
Jan 16 23:59:03.699: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get componentstatuses controller-manager'
Jan 16 23:59:04.419: INFO: stderr: ""
Jan 16 23:59:04.419: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Jan 16 23:59:04.419: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get componentstatuses scheduler'
Jan 16 23:59:05.178: INFO: stderr: ""
Jan 16 23:59:05.178: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Jan 16 23:59:05.178: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-0'
Jan 16 23:59:05.663: INFO: stderr: ""
Jan 16 23:59:05.663: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-284" for this suite.


... skipping 2 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl get componentstatuses
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should get componentstatuses
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:910
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":3,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:06.325: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 90 lines ...
• [SLOW TEST:24.337 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:10.061: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 15 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}
[BeforeEach] [sig-network] Firewall rule
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:59:01.412: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in firewall-test-4074
... skipping 23 lines ...
• [SLOW TEST:8.955 seconds]
[sig-network] Firewall rule
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have correct firewall rules for e2e cluster
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:197
------------------------------
{"msg":"PASSED [sig-network] Firewall rule should have correct firewall rules for e2e cluster","total":-1,"completed":4,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:10.380: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 107 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:11.301: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:11.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 97 lines ...
• [SLOW TEST:20.063 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:13.551: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 148 lines ...
• [SLOW TEST:10.706 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:117
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":3,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:15.797 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:22.139: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-4429 to expose endpoints map[hairpin:[8080]]
Jan 16 23:59:14.067: INFO: successfully validated that service hairpin-test in namespace services-4429 exposes endpoints map[hairpin:[8080]] (147.52734ms elapsed)
STEP: Checking if the pod can reach itself
Jan 16 23:59:15.071: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4429 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Jan 16 23:59:19.098: INFO: rc: 1
Jan 16 23:59:19.098: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4429 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jan 16 23:59:20.098: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4429 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Jan 16 23:59:24.083: INFO: stderr: "+ nc -zv -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Jan 16 23:59:24.083: INFO: stdout: ""
Jan 16 23:59:24.084: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4429 hairpin -- /bin/sh -x -c nc -zv -t -w 2 10.0.73.51 8080'
... skipping 10 lines ...
• [SLOW TEST:26.192 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:939
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:27.869: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 57 lines ...
• [SLOW TEST:31.500 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
STEP: Creating the service on top of the pods in kubernetes
Jan 16 23:58:50.263: INFO: Service node-port-service in namespace nettest-925 found.
Jan 16 23:58:51.419: INFO: Service session-affinity-service in namespace nettest-925 found.
STEP: dialing(udp) test-container-pod --> 10.0.233.232:90
Jan 16 23:58:51.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.18:8080/dial?request=hostName&protocol=udp&host=10.0.233.232&port=90&tries=1'] Namespace:nettest-925 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 23:58:51.780: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 23:58:57.996: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.2.18:54389-\u003e10.0.233.232:90: i/o timeout'"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 23:59:00.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.18:8080/dial?request=hostName&protocol=udp&host=10.0.233.232&port=90&tries=1'] Namespace:nettest-925 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 23:59:00.131: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 23:59:00.887: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 23:59:02.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.18:8080/dial?request=hostName&protocol=udp&host=10.0.233.232&port=90&tries=1'] Namespace:nettest-925 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 23:59:02.991: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 23:59:03.658: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in: (*v1.Pod)(nil)
... skipping 29 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for client IP based session affinity: udp [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:282
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:31.933: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Jan 16 23:59:17.289: INFO: Waiting for PV local-pvn7c9w to bind to PVC pvc-zsrkh
Jan 16 23:59:17.289: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zsrkh] to have phase Bound
Jan 16 23:59:17.643: INFO: PersistentVolumeClaim pvc-zsrkh found but phase is Pending instead of Bound.
Jan 16 23:59:19.938: INFO: PersistentVolumeClaim pvc-zsrkh found and phase=Bound (2.648587309s)
Jan 16 23:59:19.938: INFO: Waiting up to 3m0s for PersistentVolume local-pvn7c9w to have phase Bound
Jan 16 23:59:20.173: INFO: PersistentVolume local-pvn7c9w found and phase=Bound (235.333343ms)
[It] should fail scheduling due to different NodeSelector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 23:59:20.632: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-efaacd4c-5802-4f5d-9f53-456921af442f] Namespace:persistent-local-volumes-test-9291 PodName:hostexec-bootstrap-e2e-minion-group-6tqd-t6sqw ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 23:59:20.632: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:27.812 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeSelector
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":3,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:33.101: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 83 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:47.858: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6937
... skipping 130 lines ...
• [SLOW TEST:16.620 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:38.769: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:38.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:18.743: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 137 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:38.986: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
• [SLOW TEST:91.739 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
Jan 16 23:59:09.019: INFO: Unable to read jessie_udp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:09.217: INFO: Unable to read jessie_tcp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:09.401: INFO: Unable to read jessie_udp@dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:09.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:09.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:10.011: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:11.259: INFO: Lookups using dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5454 wheezy_tcp@dns-test-service.dns-5454 wheezy_udp@dns-test-service.dns-5454.svc wheezy_tcp@dns-test-service.dns-5454.svc wheezy_udp@_http._tcp.dns-test-service.dns-5454.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5454.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5454 jessie_tcp@dns-test-service.dns-5454 jessie_udp@dns-test-service.dns-5454.svc jessie_tcp@dns-test-service.dns-5454.svc jessie_udp@_http._tcp.dns-test-service.dns-5454.svc jessie_tcp@_http._tcp.dns-test-service.dns-5454.svc]

Jan 16 23:59:16.479: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:16.697: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:17.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:17.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:18.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:18.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:19.097: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:19.402: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5454.svc from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:22.475: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:22.779: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:23.040: INFO: Unable to read jessie_udp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:23.558: INFO: Unable to read jessie_tcp@dns-test-service.dns-5454 from pod dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801: the server could not find the requested resource (get pods dns-test-0a68c03b-8798-4964-a569-0966838de801)
Jan 16 23:59:27.579: INFO: Lookups using dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5454 wheezy_tcp@dns-test-service.dns-5454 wheezy_udp@dns-test-service.dns-5454.svc wheezy_tcp@dns-test-service.dns-5454.svc wheezy_udp@_http._tcp.dns-test-service.dns-5454.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5454.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5454 jessie_tcp@dns-test-service.dns-5454]

Jan 16 23:59:40.326: INFO: DNS probes using dns-5454/dns-test-0a68c03b-8798-4964-a569-0966838de801 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:92.085 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 158 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:45.921: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217

      Driver azure-disk doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:59:16.359: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8822
... skipping 26 lines ...
• [SLOW TEST:30.925 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:47.289: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 51 lines ...
Jan 16 23:59:30.249: INFO: Trying to get logs from node bootstrap-e2e-minion-group-6tqd pod exec-volume-test-inlinevolume-4xx8 container exec-container-inlinevolume-4xx8: <nil>
STEP: delete the pod
Jan 16 23:59:31.858: INFO: Waiting for pod exec-volume-test-inlinevolume-4xx8 to disappear
Jan 16 23:59:32.187: INFO: Pod exec-volume-test-inlinevolume-4xx8 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-4xx8
Jan 16 23:59:32.187: INFO: Deleting pod "exec-volume-test-inlinevolume-4xx8" in namespace "volume-2781"
Jan 16 23:59:34.014: INFO: error deleting PD "bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 16 23:59:34.014: INFO: Couldn't delete PD "bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 16 23:59:40.297: INFO: error deleting PD "bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 16 23:59:40.298: INFO: Couldn't delete PD "bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 16 23:59:47.649: INFO: Successfully deleted PD "bootstrap-e2e-747ce923-cd4a-4a77-93e8-a343efa7ff0b".
Jan 16 23:59:47.649: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:47.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2781" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":18,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:59:34.337: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-5028
... skipping 22 lines ...
• [SLOW TEST:14.814 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:102
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:49.165: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 192 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.298 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:50.118: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:50.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:9.410 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:50.296: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
• [SLOW TEST:11.929 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 23:59:50.933: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:50.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 28 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 23:59:51.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3361" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 167 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should contain last line of the log
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:737
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":2,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:00.291: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 95 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:14.834 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:00.770: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 96 lines ...
• [SLOW TEST:23.681 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:02.461: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:00:02.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 49 lines ...
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:688
[It] should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-5391
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5391 to expose endpoints map[]
Jan 16 23:59:50.917: INFO: Get endpoints failed (186.48981ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 16 23:59:51.983: INFO: successfully validated that service endpoint-test2 in namespace services-5391 exposes endpoints map[] (1.253094421s elapsed)
STEP: Creating pod pod1 in namespace services-5391
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5391 to expose endpoints map[pod1:[80]]
Jan 16 23:59:57.563: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.503547398s elapsed, will retry)
Jan 17 00:00:00.681: INFO: successfully validated that service endpoint-test2 in namespace services-5391 exposes endpoints map[pod1:[80]] (8.621874618s elapsed)
STEP: Creating pod pod2 in namespace services-5391
... skipping 16 lines ...
• [SLOW TEST:19.613 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:08.791: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 154 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:08.826: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:00:08.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 113 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:09.015: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:12.467: INFO: Only supported for providers [vsphere] (not gce)
... skipping 113 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:18.755: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 89 lines ...
• [SLOW TEST:9.973 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:18.827: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
      Driver vsphere doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] vsphere statefulset
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:17.124: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename vsphere-statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in vsphere-statefulset-7507
... skipping 149 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:19.154: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
... skipping 87 lines ...
• [SLOW TEST:44.929 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:855
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.883 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:27.743: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 108 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:28.068: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 87 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    when invoking the Recycle reclaim policy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:264
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:282
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":5,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:30.972: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 72 lines ...
• [SLOW TEST:8.788 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:36.863: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:8.721 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:38.424: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multi-az-2635
... skipping 218 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":2,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:43.421: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:00:43.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 112 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:43.586: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 134 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 104 lines ...
      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":68,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:41.454: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7148
... skipping 22 lines ...
• [SLOW TEST:8.413 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:53.109: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 107 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:00:59.337: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:00:59.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
Jan 17 00:00:23.448: INFO: creating *v1.StatefulSet: csi-mock-volumes-9275/csi-mockplugin
Jan 17 00:00:23.751: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-9275
Jan 17 00:00:23.931: INFO: creating *v1.StatefulSet: csi-mock-volumes-9275/csi-mockplugin-attacher
Jan 17 00:00:24.096: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9275"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jan 17 00:00:35.391: INFO: Error getting logs for pod csi-inline-volume-lbf9r: the server rejected our request for an unknown reason (get pods csi-inline-volume-lbf9r)
STEP: Deleting pod csi-inline-volume-lbf9r in namespace csi-mock-volumes-9275
STEP: Deleting the previously created pod
Jan 17 00:00:42.419: INFO: Deleting pod "pvc-volume-tester-cd2n4" in namespace "csi-mock-volumes-9275"
Jan 17 00:00:42.850: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cd2n4" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 00:00:51.431: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9275","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9275","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9275","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9275","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-d886b70a7eefa66e03dfc3beba0a1c4065f355335645ff51007b9db2fbc5ae74","target_path":"/var/lib/kubelet/pods/5473a072-b8c8-4793-92b5-01f2eb4bea50/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-cd2n4","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-9275","csi.storage.k8s.io/pod.uid":"5473a072-b8c8-4793-92b5-01f2eb4bea50","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-d886b70a7eefa66e03dfc3beba0a1c4065f355335645ff51007b9db2fbc5ae74","target_path":"/var/lib/kubelet/pods/5473a072-b8c8-4793-92b5-01f2eb4bea50/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Jan 17 00:00:51.431: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9275
Jan 17 00:00:51.431: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 5473a072-b8c8-4793-92b5-01f2eb4bea50
Jan 17 00:00:51.431: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jan 17 00:00:51.431: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 17 00:00:51.431: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-cd2n4
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":5,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:00.639: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":29,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:01.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
• [SLOW TEST:18.174 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:03.134: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 83 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:57.163: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1023
... skipping 118 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should be able to handle large requests: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:299
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":3,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:09.497: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:09.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 219 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:12.249: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 104 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:15.854: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 142 lines ...
• [SLOW TEST:15.932 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:68
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:16.601: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 184 lines ...
  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_zone_support.go:106
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:00.781: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 44 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:22.086: INFO: Driver local doesn't support ext4 -- skipping
... skipping 93 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:13.661 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:88
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:23.253: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:23.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:12.640 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:24.961: INFO: Only supported for providers [aws] (not gce)
... skipping 37 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:00:32.775: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5366
... skipping 125 lines ...
• [SLOW TEST:122.655 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to up and down services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:968
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:30.545: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:30.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:18.078: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7996
... skipping 21 lines ...
• [SLOW TEST:12.725 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:57
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":18,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 44 lines ...
• [SLOW TEST:14.613 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:14.307 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:36.432: INFO: Only supported for providers [aws] (not gce)
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:26.895: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-5669
... skipping 19 lines ...
• [SLOW TEST:10.569 seconds]
[sig-network] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide unchanging, static URL paths for kubernetes api services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:122
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":6,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:37.470: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 109 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:59:44.878: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 120 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 48 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":7,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 140 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for endpoint-Service: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:208
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":2,"skipped":29,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":10,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:49.554: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:49.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 187 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:49.914: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:49.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
{"msg":"PASSED [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion","total":-1,"completed":5,"skipped":52,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:31.013: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename events
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1964
... skipping 20 lines ...
• [SLOW TEST:19.322 seconds]
[k8s.io] [sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:38.814: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6601
... skipping 24 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1822
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1835
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":8,"skipped":30,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:16.539 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:09.283: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 45 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:54.879: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
STEP: Deleting the previously created pod
Jan 17 00:01:04.695: INFO: Deleting pod "pvc-volume-tester-hxsgs" in namespace "csi-mock-volumes-9828"
Jan 17 00:01:04.863: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hxsgs" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 00:01:23.667: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9828","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9828","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9828","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f9d72f01-7919-4b08-8399-8272afb87bcc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f9d72f01-7919-4b08-8399-8272afb87bcc"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9828","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-9828","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f9d72f01-7919-4b08-8399-8272afb87bcc","storage.kubernetes.io/csiProvisionerIdentity":"1579219237744-8081-csi-mock-csi-mock-volumes-9828"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f9d72f01-7919-4b08-8399-8272afb87bcc","storage.kubernetes.io/csiProvisionerIdentity":"1579219237744-8081-csi-mock-csi-mock-volumes-9828"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/globalmount","target_path":"/var/lib/kubelet/pods/56826bac-e9f4-4b63-b409-6808f6a4c907/volumes/kubernetes.io~csi/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f9d72f01-7919-4b08-8399-8272afb87bcc","storage.kubernetes.io/csiProvisionerIdentity":"1579219237744-8081-csi-mock-csi-mock-volumes-9828"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/56826bac-e9f4-4b63-b409-6808f6a4c907/volumes/kubernetes.io~csi/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/56826bac-e9f4-4b63-b409-6808f6a4c907/volumes/kubernetes.io~csi/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f9d72f01-7919-4b08-8399-8272afb87bcc/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-9828"},"Response":{},"Error":""}

Jan 17 00:01:23.668: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-hxsgs
Jan 17 00:01:23.668: INFO: Deleting pod "pvc-volume-tester-hxsgs" in namespace "csi-mock-volumes-9828"
STEP: Deleting claim pvc-ck82v
Jan 17 00:01:24.607: INFO: Waiting up to 2m0s for PersistentVolume pvc-f9d72f01-7919-4b08-8399-8272afb87bcc to get deleted
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    should not be passed when podInfoOnMount=false
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:55.569: INFO: Only supported for providers [azure] (not gce)
... skipping 166 lines ...
• [SLOW TEST:72.408 seconds]
[sig-storage] Mounted volume expand
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:115
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":3,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:55.854: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 207 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with pvc data source
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:214
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:56.644: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:01:56.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
• [SLOW TEST:18.228 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:01:57.078: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 116 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:15.835: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2806
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:04.385: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:04.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 140 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should not deadlock when a pod's predecessor fails
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:244
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":3,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:05.398: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 145 lines ...
• [SLOW TEST:16.967 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:09.952: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:09.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 173 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume without restarting pod if nodeExpansion=off
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:10.329: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:10.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:13.555: INFO: Driver nfs doesn't support ext4 -- skipping
... skipping 86 lines ...
• [SLOW TEST:18.990 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:01:04.841: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-9758
... skipping 85 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should require VolumeAttach for drivers with attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:16.710: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:16.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 179 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:16.949: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:16.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 176 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:537
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":7,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:21.230 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:87
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":4,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:20.023 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support r/w [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:17.248: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:17.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:19.488: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:19.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:257.117 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]","total":-1,"completed":1,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:26.341: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:26.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
• [SLOW TEST:17.021 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:457
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731
    should create a deployment from an image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:27.672: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:27.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 135 lines ...
• [SLOW TEST:32.150 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a local redirect http liveness probe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:34
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 5 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 51 lines ...
• [SLOW TEST:38.186 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":9,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:29.175: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 52 lines ...
• [SLOW TEST:13.410 seconds]
[sig-auth] PodSecurityPolicy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should enforce the restricted policy.PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy","total":-1,"completed":7,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:34.146: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 58 lines ...
• [SLOW TEST:10.856 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
Jan 17 00:02:27.519: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:27.643: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:28.043: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:28.183: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:28.351: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:28.555: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1601.svc.cluster.local from pod dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a: the server could not find the requested resource (get pods dns-test-a150d343-1eda-42e0-9956-b9403d94c12a)
Jan 17 00:02:28.995: INFO: Lookups using dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1601.svc.cluster.local jessie_udp@dns-test-service-2.dns-1601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1601.svc.cluster.local]

Jan 17 00:02:39.223: INFO: DNS probes using dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:51.184 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":6,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile","total":-1,"completed":2,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:43.429: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
• [SLOW TEST:26.421 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:43.547: INFO: Driver emptydir doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:43.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:45.992: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:45.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 44 lines ...
Jan 17 00:02:35.031: INFO: Pod exec-volume-test-preprovisionedpv-qbsd no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-qbsd
Jan 17 00:02:35.031: INFO: Deleting pod "exec-volume-test-preprovisionedpv-qbsd" in namespace "volume-4540"
STEP: Deleting pv and pvc
Jan 17 00:02:35.283: INFO: Deleting PersistentVolumeClaim "pvc-bvrjb"
Jan 17 00:02:35.647: INFO: Deleting PersistentVolume "gcepd-cpq2r"
Jan 17 00:02:37.487: INFO: error deleting PD "bootstrap-e2e-0d05269c-518d-436e-a0f9-591a2b4e7242": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0d05269c-518d-436e-a0f9-591a2b4e7242' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:02:37.487: INFO: Couldn't delete PD "bootstrap-e2e-0d05269c-518d-436e-a0f9-591a2b4e7242", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0d05269c-518d-436e-a0f9-591a2b4e7242' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:02:44.895: INFO: Successfully deleted PD "bootstrap-e2e-0d05269c-518d-436e-a0f9-591a2b4e7242".
Jan 17 00:02:44.895: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:44.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4540" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:46.287: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 233 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should update endpoints: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:228
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:02:46.408: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 54 lines ...
• [SLOW TEST:11.109 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:52.245: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:02:52.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 143 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":40,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:28.160 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:789
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":7,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:02:56.007: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
Jan 17 00:02:56.539: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 47 lines ...
• [SLOW TEST:14.566 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 52 lines ...
• [SLOW TEST:8.233 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:03.182: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 49 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":37,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:05.898: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:05.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 79 lines ...
Jan 17 00:02:23.212: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-q2j4j] to have phase Bound
Jan 17 00:02:23.483: INFO: PersistentVolumeClaim pvc-q2j4j found but phase is Pending instead of Bound.
Jan 17 00:02:25.719: INFO: PersistentVolumeClaim pvc-q2j4j found and phase=Bound (2.506998242s)
Jan 17 00:02:25.719: INFO: Waiting up to 3m0s for PersistentVolume gce-wdsfx to have phase Bound
Jan 17 00:02:26.039: INFO: PersistentVolume gce-wdsfx found and phase=Bound (320.043598ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Jan 17 00:02:43.655: INFO: Deleting PersistentVolume "gce-wdsfx"
STEP: Deleting the client pod
Jan 17 00:02:44.487: INFO: Deleting pod "pvc-tester-8k9j4" in namespace "pv-9750"
Jan 17 00:02:45.587: INFO: Wait up to 5m0s for pod "pvc-tester-8k9j4" to be fully deleted
... skipping 14 lines ...
Jan 17 00:03:08.185: INFO: Successfully deleted PD "bootstrap-e2e-4dca7569-269d-4d35-b80d-7297351d8a65".


• [SLOW TEST:51.138 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":8,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jan 17 00:02:56.787: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239501593s
Jan 17 00:02:58.999: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451887396s
Jan 17 00:03:01.243: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695871598s
Jan 17 00:03:03.399: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851818631s
Jan 17 00:03:05.517: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.96941743s
Jan 17 00:03:07.730: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.183117095s
Jan 17 00:03:09.913: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0": Phase="Failed", Reason="", readiness=false. Elapsed: 15.366081005s
Jan 17 00:03:09.913: INFO: Pod "busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:09.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-723" for this suite.

... skipping 3 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:10.718: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 204 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:15.290: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 32 lines ...
• [SLOW TEST:20.892 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":12,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:21.474: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 114 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:25.844: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:25.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 26 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support sysctls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:21.416 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support sysctls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":6,"skipped":42,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":70,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:14.411: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1000
... skipping 24 lines ...
• [SLOW TEST:11.737 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:26.158: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:17.998 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":9,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:26.203: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 154 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:27.340: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:27.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 156 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:28.991: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
STEP: Destroying namespace "services-9908" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces","total":-1,"completed":9,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:30.775: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:30.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 23:58:11.244: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-2146
... skipping 234 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should update endpoints: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:217
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:35.167: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:35.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 106 lines ...
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-67k4r webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-67k4r 5413a2de-cb4c-4cba-8d21-1cb76393ba3d 11461 0 2020-01-17 00:03:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370cea0 0xc00370cea1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-6tqd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-01-17 00:03:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.747: INFO: Pod "webserver-deployment-c7997dcc8-9w4ht" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9w4ht webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-9w4ht 66db793f-629e-49e9-a7f7-2a6479a59a0f 11394 0 2020-01-17 00:03:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d000 0xc00370d001}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-d58v,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.747: INFO: Pod "webserver-deployment-c7997dcc8-cvt94" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cvt94 webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-cvt94 9c9d63cd-bb2f-4fab-893c-3dabb862ff96 11435 0 2020-01-17 00:03:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d110 0xc00370d111}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zzr9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.6,PodIP:,StartTime:2020-01-17 00:03:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.748: INFO: Pod "webserver-deployment-c7997dcc8-d7p48" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d7p48 webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-d7p48 3c8c7cd9-78c2-4cb9-837b-5c2496b60bb7 11416 0 2020-01-17 00:03:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d270 0xc00370d271}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zzr9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.6,PodIP:10.64.2.104,StartTime:2020-01-17 00:03:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.751: INFO: Pod "webserver-deployment-c7997dcc8-dhtxv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dhtxv webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-dhtxv 478ccf3a-cc7c-4be9-8de2-9cc7fb48c057 11443 0 2020-01-17 00:03:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d400 0xc00370d401}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zzr9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.6,PodIP:10.64.2.103,StartTime:2020-01-17 00:03:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.754: INFO: Pod "webserver-deployment-c7997dcc8-m8v4v" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m8v4v webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-m8v4v 95c81dc7-c38a-4d28-96cc-ce55325c1b1b 11476 0 2020-01-17 00:03:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d590 0xc00370d591}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-6tqd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-01-17 00:03:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.754: INFO: Pod "webserver-deployment-c7997dcc8-mj9sd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mj9sd webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-mj9sd 58bb4609-75c4-4e7d-a30e-57052e772424 11484 0 2020-01-17 00:03:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d6f0 0xc00370d6f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zzr9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.6,PodIP:10.64.2.102,StartTime:2020-01-17 00:03:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.755: INFO: Pod "webserver-deployment-c7997dcc8-nck5r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nck5r webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-nck5r e1be2964-6397-4e83-88ad-9356130f10fc 11457 0 2020-01-17 00:03:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d880 0xc00370d881}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-w9fq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:30 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-17 00:03:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.755: INFO: Pod "webserver-deployment-c7997dcc8-rcsvj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rcsvj webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-rcsvj a656bf0e-6447-44a0-933d-3c6ff79bbe91 11470 0 2020-01-17 00:03:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370d9e0 0xc00370d9e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePerio
dSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-w9fq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.1.104,StartTime:2020-01-17 00:03:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 00:03:34.755: INFO: Pod "webserver-deployment-c7997dcc8-zl6pn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zl6pn webserver-deployment-c7997dcc8- deployment-6078 /api/v1/namespaces/deployment-6078/pods/webserver-deployment-c7997dcc8-zl6pn a724bb10-49ef-4b3a-bbc4-27ea8b6a4448 11436 0 2020-01-17 00:03:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9faf90f-f8e7-4466-bd73-bce0458031df 0xc00370db70 0xc00370db71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9twb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9twb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9twb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-w9fq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 00:03:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-17 00:03:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:34.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6078" for this suite.
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-4906 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jan 17 00:03:14.995: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 17 00:03:17.530: INFO: rc: 1
Jan 17 00:03:17.530: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-01-17 00:03:17.296210605 +0000 UTC m=+31.898595500
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jan 17 00:03:19.531: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 17 00:03:22.807: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jan 17 00:03:22.807: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jan 17 00:03:23.518: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/'
Jan 17 00:03:26.645: INFO: rc: 7
Jan 17 00:03:26.645: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Jan 17 00:03:28.646: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec --namespace=services-4906 execpod-wwlql -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/'
Jan 17 00:03:33.591: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4906.svc.cluster.local:80/\n"
Jan 17 00:03:33.591: INFO: stdout: "NOW: 2020-01-17 00:03:32.28859133 +0000 UTC m=+46.890976222"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-4906
... skipping 9 lines ...
• [SLOW TEST:60.472 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create endpoints for unready pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1936
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":6,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:38.204: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
• [SLOW TEST:52.307 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:38.608: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:38.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 138 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support multiple inline ephemeral volumes
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:177
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":7,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:39.285: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path","total":-1,"completed":8,"skipped":45,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:38.245: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-1289
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:45.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7675" for this suite.


• [SLOW TEST:6.420 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not launch unsafe, but not explicitly enabled sysctls on the node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":8,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:45.766: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:48.552: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:48.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
• [SLOW TEST:104.235 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:50.962: INFO: Only supported for providers [openstack] (not gce)
... skipping 174 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with mount options
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:03:52.962: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:03:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440

      Driver nfs doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":9,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:26.211 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:40.318: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-1299
... skipping 15 lines ...
• [SLOW TEST:21.076 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:01.407: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:01.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:13.160 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:04.146: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 86 lines ...
• [SLOW TEST:11.245 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":82,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:53.999: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8939
... skipping 21 lines ...
• [SLOW TEST:10.239 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:04.240: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:04.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:04.247: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 117 lines ...
• [SLOW TEST:14.639 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:06.057: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 102 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":10,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:06.653: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:06.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 90 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:11.268: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 57 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:01.420: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:11.454: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:5.633 seconds]
[sig-api-machinery] Servers with support for Table transformation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return pod details
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:51
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":11,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:12.304: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
• [SLOW TEST:159.949 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:14.212: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 42 lines ...
• [SLOW TEST:40.076 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:17.108: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:17.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 83 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":49,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:21.306: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 63 lines ...
      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:03:35.368: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-8470
... skipping 9 lines ...
Jan 17 00:03:41.731: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-stzvb] to have phase Bound
Jan 17 00:03:41.835: INFO: PersistentVolumeClaim pvc-stzvb found but phase is Pending instead of Bound.
Jan 17 00:03:43.996: INFO: PersistentVolumeClaim pvc-stzvb found and phase=Bound (2.264478037s)
Jan 17 00:03:43.996: INFO: Waiting up to 3m0s for PersistentVolume gce-8nhpx to have phase Bound
Jan 17 00:03:44.211: INFO: PersistentVolume gce-8nhpx found and phase=Bound (215.479591ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Jan 17 00:04:03.335: INFO: Deleting PersistentVolumeClaim "pvc-stzvb"
STEP: Deleting the Pod
Jan 17 00:04:03.726: INFO: Deleting pod "pvc-tester-lksq9" in namespace "pv-8470"
Jan 17 00:04:03.847: INFO: Wait up to 5m0s for pod "pvc-tester-lksq9" to be fully deleted
... skipping 7 lines ...
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:108
Jan 17 00:04:13.012: INFO: AfterEach: Cleaning up test resources
Jan 17 00:04:13.012: INFO: Deleting pod "pvc-tester-lksq9" in namespace "pv-8470"
Jan 17 00:04:13.360: INFO: Deleting PersistentVolumeClaim "pvc-stzvb"
Jan 17 00:04:13.635: INFO: Deleting PersistentVolume "gce-8nhpx"
Jan 17 00:04:15.427: INFO: error deleting PD "bootstrap-e2e-c4e1925a-0399-404d-9275-3397b5e52f62": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-c4e1925a-0399-404d-9275-3397b5e52f62' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:15.427: INFO: Couldn't delete PD "bootstrap-e2e-c4e1925a-0399-404d-9275-3397b5e52f62", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-c4e1925a-0399-404d-9275-3397b5e52f62' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:22.804: INFO: Successfully deleted PD "bootstrap-e2e-c4e1925a-0399-404d-9275-3397b5e52f62".


• [SLOW TEST:47.436 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:22.806: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:22.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should create a job from an image when restart is OnFailure  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":-1,"completed":4,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:22.820: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
Jan 17 00:03:53.744: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-3793 -- grep  /opt/0  /proc/mounts'
Jan 17 00:03:55.347: INFO: stderr: ""
Jan 17 00:03:55.347: INFO: stdout: "/dev/sdb /opt/0 ext4 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Jan 17 00:03:55.347: INFO: Deleting pod "gcepd-client" in namespace "volume-3793"
Jan 17 00:03:55.748: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 17 00:04:07.763: INFO: error deleting PD "bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:07.763: INFO: Couldn't delete PD "bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:14.111: INFO: error deleting PD "bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:14.111: INFO: Couldn't delete PD "bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:04:21.556: INFO: Successfully deleted PD "bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e".
Jan 17 00:04:21.556: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:21.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3793" for this suite.
... skipping 8 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:22.828: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:04:22.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 255 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should create read-only inline ephemeral volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":8,"skipped":49,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:04:29.730: INFO: Only supported for providers [azure] (not gce)
... skipping 50 lines ...
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should contain custom columns for each resource
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:425
Jan 17 00:03:24.695: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get pods --all-namespaces'
Jan 17 00:03:25.541: INFO: stderr: ""
Jan 17 00:03:25.541: INFO: stdout: "NAMESPACE                            NAME                                                         READY   STATUS              RESTARTS   AGE\napparmor-3326                        apparmor-loader-ppf7s                                        1/1     Running             0          57s\napparmor-3326                        test-apparmor-jnb74                                          0/1     Completed           0          48s\nconfigmap-8878                       pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          3/3     Running             0          76s\ncontainer-probe-2757                 liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                0/1     CrashLoopBackOff    4          107s\ncontainer-probe-6927                 test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          1/1     Running             0          3m43s\ncontainers-483                       client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7       1/1     Running             0          57s\ncsi-mock-volumes-5764                csi-mockplugin-0                                             3/3     Running             0          85s\ncsi-mock-volumes-5764                csi-mockplugin-attacher-0                                    1/1     Running             0          84s\ncsi-mock-volumes-5764                csi-mockplugin-resizer-0                                     1/1     Running             0          85s\ncsi-mock-volumes-5764                pvc-volume-tester-bpjns                                      1/1     Running             0          75s\ncsi-mock-volumes-9568                csi-mockplugin-0                                             3/3     Running             0          99s\ncsi-mock-volumes-9568                csi-mockplugin-attacher-0                                    1/1     Running             0          98s\ndefault                              recycler-for-nfs-nff4f                                       1/1     Running  
           0          3m7s\ndeployment-6078                      webserver-deployment-595b5b9587-4227m                        1/1     Running             0          19s\ndeployment-6078                      webserver-deployment-595b5b9587-7jp6h                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-9697g                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-bqfdc                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-fncww                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-gb87n                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-j5czc                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-jqdfh                        1/1     Running             0          19s\ndeployment-6078                      webserver-deployment-595b5b9587-p2jgg                        1/1     Running             0          20s\ndeployment-6078                      webserver-deployment-595b5b9587-sqrvx                        1/1     Running             0          19s\nemptydir-wrapper-6520                pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe             1/1     Running             0          14s\nephemeral-1724                       csi-hostpath-attacher-0                                      1/1     Running             0          42s\nephemeral-1724                       csi-hostpath-provisioner-0                                   1/1     Running             0          42s\nephemeral-1724                       csi-hostpath-resizer-0                                       1/1     Running             
0          43s\nephemeral-1724                       csi-hostpathplugin-0                                         3/3     Running             0          44s\nephemeral-1724                       csi-snapshotter-0                                            1/1     Running             0          43s\nephemeral-1724                       inline-volume-tester-nx984                                   1/1     Running             0          43s\nephemeral-1724                       inline-volume-tester2-khg5b                                  1/1     Terminating         0          20s\nephemeral-6080                       csi-hostpath-attacher-0                                      1/1     Running             0          20s\nephemeral-6080                       csi-hostpath-provisioner-0                                   1/1     Running             0          20s\nephemeral-6080                       csi-hostpath-resizer-0                                       1/1     Running             0          21s\nephemeral-6080                       csi-hostpathplugin-0                                         0/3     ContainerCreating   0          22s\nephemeral-6080                       csi-snapshotter-0                                            1/1     Running             0          21s\nephemeral-6080                       inline-volume-tester-jpm9w                                   1/1     Running             0          21s\nephemeral-726                        csi-hostpath-attacher-0                                      1/1     Running             0          58s\nephemeral-726                        csi-hostpath-provisioner-0                                   1/1     Running             0          58s\nephemeral-726                        csi-hostpath-resizer-0                                       1/1     Running             0          59s\nephemeral-726                        csi-hostpathplugin-0                                         3/3     Running             0          
61s\nephemeral-726                        csi-snapshotter-0                                            1/1     Running             0          59s\njob-2435                             all-pods-removed-g9rzm                                       1/1     Terminating         0          36s\njob-2435                             all-pods-removed-pv5sk                                       1/1     Terminating         0          36s\nkube-system                          coredns-65567c7b57-5ngxt                                     1/1     Running             0          8m35s\nkube-system                          coredns-65567c7b57-vfjh7                                     1/1     Running             0          8m2s\nkube-system                          etcd-empty-dir-cleanup-bootstrap-e2e-master                  1/1     Running             0          8m11s\nkube-system                          etcd-server-bootstrap-e2e-master                             1/1     Running             0          8m11s\nkube-system                          etcd-server-events-bootstrap-e2e-master                      1/1     Running             0          8m11s\nkube-system                          event-exporter-v0.3.1-747b47fcd-h9n4q                        2/2     Running             0          8m38s\nkube-system                          fluentd-gcp-scaler-76d9c77b4d-x9pzf                          1/1     Running             0          8m31s\nkube-system                          fluentd-gcp-v3.2.0-6cvd6                                     2/2     Running             0          7m\nkube-system                          fluentd-gcp-v3.2.0-g8dmm                                     2/2     Running             0          7m5s\nkube-system                          fluentd-gcp-v3.2.0-nrd82                                     2/2     Running             0          7m26s\nkube-system                          fluentd-gcp-v3.2.0-tz8kx                                     2/2     Running             0       
   7m54s\nkube-system                          fluentd-gcp-v3.2.0-x9nbv                                     2/2     Running             0          7m14s\nkube-system                          kube-addon-manager-bootstrap-e2e-master                      1/1     Running             0          8m12s\nkube-system                          kube-apiserver-bootstrap-e2e-master                          1/1     Running             0          8m12s\nkube-system                          kube-controller-manager-bootstrap-e2e-master                 1/1     Running             0          8m11s\nkube-system                          kube-dns-autoscaler-65bc6d4889-7kfcx                         1/1     Running             0          8m27s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-6tqd                   1/1     Running             0          8m24s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-d58v                   1/1     Running             0          8m25s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-w9fq                   1/1     Running             0          8m25s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-zzr9                   1/1     Running             0          8m24s\nkube-system                          kube-scheduler-bootstrap-e2e-master                          1/1     Running             0          8m11s\nkube-system                          kubernetes-dashboard-7778f8b456-rw5w4                        1/1     Running             0          8m31s\nkube-system                          l7-default-backend-678889f899-zxxsk                          1/1     Running             0          8m34s\nkube-system                          l7-lb-controller-bootstrap-e2e-master                        1/1     Running             2          8m10s\nkube-system                          metadata-proxy-v0.1-8hdgh                                    2/2     Running     
        0          8m13s\nkube-system                          metadata-proxy-v0.1-jf7n6                                    2/2     Running             0          8m25s\nkube-system                          metadata-proxy-v0.1-tjqp5                                    2/2     Running             0          8m26s\nkube-system                          metadata-proxy-v0.1-x8f9w                                    2/2     Running             0          8m24s\nkube-system                          metadata-proxy-v0.1-xhfwp                                    2/2     Running             0          8m24s\nkube-system                          metrics-server-v0.3.6-5f859c87d6-7rfvc                       2/2     Running             0          7m51s\nkube-system                          volume-snapshot-controller-0                                 1/1     Running             0          8m25s\nkubectl-522                          e2e-test-httpd-deployment-594dddd44f-wm77v                   1/1     Terminating         0          61s\nkubectl-7411                         pod1dzwngmfhc7                                               0/1     Pending             0          1s\nkubelet-test-7287                    bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366                0/1     Error               0          28s\nnettest-1552                         netserver-1                                                  1/1     Running             0          4m54s\nnettest-1552                         netserver-2                                                  1/1     Running             0          4m54s\nnettest-1552                         netserver-3                                                  1/1     Running             0          4m54s\nnettest-1552                         test-container-pod                                           1/1     Running             0          4m30s\nnettest-2146                         netserver-1                                                  1/1     
Running             0          5m10s\nnettest-2146                         netserver-2                                                  1/1     Running             0          5m10s\nnettest-2146                         netserver-3                                                  1/1     Running             0          5m10s\nnettest-2146                         test-container-pod                                           1/1     Running             0          4m44s\npersistent-local-volumes-test-2314   hostexec-bootstrap-e2e-minion-group-6tqd-qgwk2               1/1     Running             0          9s\npersistent-local-volumes-test-5039   hostexec-bootstrap-e2e-minion-group-6tqd-vs4dp               1/1     Running             0          110s\npersistent-local-volumes-test-8906   hostexec-bootstrap-e2e-minion-group-6tqd-4x72w               1/1     Running             0          20s\npersistent-local-volumes-test-8906   security-context-7e20886c-66a5-4655-8255-5fcd164360f6        1/1     Running             0          8s\npods-5144                            pod-ready                                                    1/1     Running             0          56s\npods-9828                            pod-logs-websocket-dc87754c-15f4-4d3c-b5d7-985a6ebcd852      1/1     Terminating         0          104s\npodsecuritypolicy-4050               allowed                                                      1/1     Running             0          66s\npodsecuritypolicy-607                apparmor                                                     1/1     Running             0          49s\npodsecuritypolicy-607                hostipc                                                      1/1     Running             0          58s\npodsecuritypolicy-607                hostnet                                                      1/1     Running             0          70s\npodsecuritypolicy-607                hostpath                                                     1/1     
Running             0          84s\npodsecuritypolicy-607                hostpid                                                      1/1     Running             0          64s\npodsecuritypolicy-607                privileged                                                   1/1     Running             0          96s\npodsecuritypolicy-607                runasgroup                                                   0/1     ContainerCreating   0          16s\npodsecuritypolicy-607                seccomp                                                      1/1     Running             0          39s\npodsecuritypolicy-607                sysadmin                                                     1/1     Running             0          30s\nprojected-1000                       downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f      0/1     Terminating         0          10s\nprovisioning-2093                    gluster-server                                               0/1     Terminating         0          59s\nprovisioning-2251                    csi-hostpathplugin-0                                         0/3     Terminating         0          71s\nprovisioning-2251                    csi-snapshotter-0                                            0/1     Terminating         0          69s\nprovisioning-3649                    external-provisioner-r6zzc                                   1/1     Running             0          48s\nprovisioning-3649                    pvc-volume-tester-reader-khx8k                               0/1     ContainerCreating   0          11s\nprovisioning-5007                    hostpath-symlink-prep-provisioning-5007                      0/1     Completed           0          9s\nprovisioning-532                     gluster-server                                               1/1     Running             0          18s\nprovisioning-9405                    hostexec-bootstrap-e2e-minion-group-d58v-dr2h2               1/1     Running      
       0          31s\nprovisioning-9405                    pod-subpath-test-preprovisionedpv-869x                       0/1     Init:0/2            0          11s\nsched-preemption-path-5056           pod4                                                         0/1     Pending             0          18s\nsched-preemption-path-5056           rs-pod1-kzgzn                                                0/1     Pending             0          17s\nsched-preemption-path-5056           rs-pod2-tf8vj                                                0/1     Pending             0          17s\nsched-preemption-path-5056           rs-pod3-7zjmk                                                1/1     Running             0          25s\nsecurity-context-test-431            alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        0/1     Completed           0          52s\nsecurity-context-test-723            busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0   0/1     Error               0          31s\nservices-2515                        execpod7dfjh                                                 1/1     Running             0          19s\nservices-2515                        externalname-service-9kz8f                                   1/1     Running             0          22s\nservices-2515                        externalname-service-h25tx                                   1/1     Running             0          22s\nservices-4906                        execpod-wwlql                                                1/1     Running             0          34s\nservices-4906                        slow-terminating-unready-pod-klqkd                           0/1     Terminating         0          45s\nsysctl-1255                          sysctl-497fb630-456d-411b-bb02-dd430e1bc51d                  0/1     Completed           0          66s\nsysctl-8370                          sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  0/1     Completed           0    
      19s\nvolume-3106                          hostexec-bootstrap-e2e-minion-group-d58v-82wnd               1/1     Running             0          75s\nvolume-3793                          gcepd-client                                                 0/1     ContainerCreating   0          20s\nvolumemode-5518                      gluster-server                                               1/1     Running             0          10s\nvolumemode-8010                      hostexec-bootstrap-e2e-minion-group-d58v-ss69n               1/1     Running             0          48s\nvolumemode-8010                      security-context-60668457-80cb-467b-99ff-bb82d58250d8        1/1     Terminating         0          24s\n"
Jan 17 00:03:26.095: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 17 00:03:27.395: INFO: stderr: ""
Jan 17 00:03:27.395: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                     OBJECT                                                           MESSAGE\napparmor-3326                        58s         Normal    Scheduled                  pod/apparmor-loader-ppf7s                                        Successfully assigned apparmor-3326/apparmor-loader-ppf7s to bootstrap-e2e-minion-group-w9fq\napparmor-3326                        56s         Normal    Pulling                    pod/apparmor-loader-ppf7s                                        Pulling image \"gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0\"\napparmor-3326                        53s         Normal    Pulled                     pod/apparmor-loader-ppf7s                                        Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0\"\napparmor-3326                        53s         Normal    Created                    pod/apparmor-loader-ppf7s                                        Created container apparmor-loader\napparmor-3326                        53s         Normal    Started                    pod/apparmor-loader-ppf7s                                        Started container apparmor-loader\napparmor-3326                        58s         Normal    SuccessfulCreate           replicationcontroller/apparmor-loader                            Created pod: apparmor-loader-ppf7s\napparmor-3326                        49s         Normal    Scheduled                  pod/test-apparmor-jnb74                                          Successfully assigned apparmor-3326/test-apparmor-jnb74 to bootstrap-e2e-minion-group-w9fq\napparmor-3326                        47s         Normal    Pulled                     pod/test-apparmor-jnb74                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\napparmor-3326                        47s         Normal    Created 
                   pod/test-apparmor-jnb74                                          Created container test\napparmor-3326                        47s         Normal    Started                    pod/test-apparmor-jnb74                                          Started container test\nclientset-3493                       32s         Normal    Scheduled                  pod/pod34072a5a-c697-4116-8105-5ab8c24eb1dc                      Successfully assigned clientset-3493/pod34072a5a-c697-4116-8105-5ab8c24eb1dc to bootstrap-e2e-minion-group-w9fq\nclientset-3493                       30s         Normal    Pulled                     pod/pod34072a5a-c697-4116-8105-5ab8c24eb1dc                      Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nclientset-3493                       29s         Normal    Created                    pod/pod34072a5a-c697-4116-8105-5ab8c24eb1dc                      Created container nginx\nclientset-3493                       29s         Normal    Started                    pod/pod34072a5a-c697-4116-8105-5ab8c24eb1dc                      Started container nginx\nclientset-3493                       26s         Normal    Killing                    pod/pod34072a5a-c697-4116-8105-5ab8c24eb1dc                      Stopping container nginx\nconfigmap-8878                       76s         Normal    Scheduled                  pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Successfully assigned configmap-8878/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0 to bootstrap-e2e-minion-group-6tqd\nconfigmap-8878                       73s         Normal    Pulled                     pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-8878                       72s         Normal    Created                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Created 
container delcm-volume-test\nconfigmap-8878                       72s         Normal    Started                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Started container delcm-volume-test\nconfigmap-8878                       72s         Normal    Pulled                     pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-8878                       72s         Normal    Created                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Created container updcm-volume-test\nconfigmap-8878                       71s         Normal    Started                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Started container updcm-volume-test\nconfigmap-8878                       71s         Normal    Pulled                     pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-8878                       71s         Normal    Created                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Created container createcm-volume-test\nconfigmap-8878                       70s         Normal    Started                    pod/pod-configmaps-f4f499c1-85b4-4d81-a3c6-71cb919052c0          Started container createcm-volume-test\ncontainer-probe-251                  85s         Normal    Scheduled                  pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Successfully assigned container-probe-251/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1 to bootstrap-e2e-minion-group-w9fq\ncontainer-probe-251                  84s         Warning   FailedMount                pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                MountVolume.SetUp failed for volume \"default-token-kr7j2\" : failed to sync secret cache: timed out 
waiting for the condition\ncontainer-probe-251                  63s         Normal    Pulled                     pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainer-probe-251                  63s         Normal    Created                    pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Created container liveness\ncontainer-probe-251                  62s         Normal    Started                    pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Started container liveness\ncontainer-probe-251                  63s         Warning   Unhealthy                  pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Liveness probe failed: HTTP probe failed with statuscode: 500\ncontainer-probe-251                  63s         Normal    Killing                    pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Container liveness failed liveness probe, will be restarted\ncontainer-probe-251                  59s         Normal    Killing                    pod/liveness-1fa80131-a9ef-4c46-b878-caebcec03de1                Stopping container liveness\ncontainer-probe-2757                 108s        Normal    Scheduled                  pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Successfully assigned container-probe-2757/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8 to bootstrap-e2e-minion-group-w9fq\ncontainer-probe-2757                 27s         Normal    Pulled                     pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainer-probe-2757                 27s         Normal    Created                    pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Created container liveness\ncontainer-probe-2757                 26s         Normal    
Started                    pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Started container liveness\ncontainer-probe-2757                 7s          Warning   Unhealthy                  pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Liveness probe failed: HTTP probe failed with statuscode: 500\ncontainer-probe-2757                 7s          Normal    Killing                    pod/liveness-e08dd933-6e02-4e45-87e3-4377bc33f3c8                Container liveness failed liveness probe, will be restarted\ncontainer-probe-6927                 3m44s       Normal    Scheduled                  pod/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          Successfully assigned container-probe-6927/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce to bootstrap-e2e-minion-group-zzr9\ncontainer-probe-6927                 3m40s       Normal    Pulling                    pod/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          Pulling image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-6927                 3m39s       Normal    Pulled                     pod/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-6927                 3m39s       Normal    Created                    pod/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          Created container test-webserver\ncontainer-probe-6927                 3m39s       Normal    Started                    pod/test-webserver-8e67e2ac-abf0-40b5-9dbf-ddf574efffce          Started container test-webserver\ncontainer-probe-7035                 61s         Normal    Killing                    pod/liveness-f6d4439d-6121-476c-8143-55125d82dd86                Stopping container liveness\ncontainers-483                       58s         Normal    Scheduled                  pod/client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7       Successfully 
assigned containers-483/client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7 to bootstrap-e2e-minion-group-w9fq\ncontainers-483                       56s         Normal    Pulled                     pod/client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7       Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainers-483                       56s         Normal    Created                    pod/client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7       Created container test-container\ncontainers-483                       56s         Normal    Started                    pod/client-containers-eeb4d8f1-c41b-4025-9f38-433b5bfe60f7       Started container test-container\ncrd-webhook-1280                     63s         Normal    Scheduled                  pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt    Successfully assigned crd-webhook-1280/sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt to bootstrap-e2e-minion-group-w9fq\ncrd-webhook-1280                     61s         Normal    Pulled                     pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncrd-webhook-1280                     61s         Normal    Created                    pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt    Created container sample-crd-conversion-webhook\ncrd-webhook-1280                     61s         Normal    Started                    pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt    Started container sample-crd-conversion-webhook\ncrd-webhook-1280                     63s         Normal    SuccessfulCreate           replicaset/sample-crd-conversion-webhook-deployment-78dcf5dd84   Created pod: sample-crd-conversion-webhook-deployment-78dcf5dd84-c8sbt\ncrd-webhook-1280                     64s         Normal    ScalingReplicaSet          
deployment/sample-crd-conversion-webhook-deployment              Scaled up replica set sample-crd-conversion-webhook-deployment-78dcf5dd84 to 1\ncsi-mock-volumes-5764                82s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-5764                82s         Normal    Created                    pod/csi-mockplugin-0                                             Created container csi-provisioner\ncsi-mock-volumes-5764                82s         Normal    Started                    pod/csi-mockplugin-0                                             Started container csi-provisioner\ncsi-mock-volumes-5764                82s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-5764                82s         Normal    Created                    pod/csi-mockplugin-0                                             Created container driver-registrar\ncsi-mock-volumes-5764                80s         Normal    Started                    pod/csi-mockplugin-0                                             Started container driver-registrar\ncsi-mock-volumes-5764                80s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-5764                80s         Normal    Created                    pod/csi-mockplugin-0                                             Created container mock\ncsi-mock-volumes-5764                80s         Normal    Started                    pod/csi-mockplugin-0                                             Started container mock\ncsi-mock-volumes-5764                82s  
       Normal    Pulled                     pod/csi-mockplugin-attacher-0                                    Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-5764                82s         Normal    Created                    pod/csi-mockplugin-attacher-0                                    Created container csi-attacher\ncsi-mock-volumes-5764                81s         Normal    Started                    pod/csi-mockplugin-attacher-0                                    Started container csi-attacher\ncsi-mock-volumes-5764                85s         Normal    SuccessfulCreate           statefulset/csi-mockplugin-attacher                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-5764                82s         Normal    Pulled                     pod/csi-mockplugin-resizer-0                                     Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\ncsi-mock-volumes-5764                82s         Normal    Created                    pod/csi-mockplugin-resizer-0                                     Created container csi-resizer\ncsi-mock-volumes-5764                81s         Normal    Started                    pod/csi-mockplugin-resizer-0                                     Started container csi-resizer\ncsi-mock-volumes-5764                86s         Normal    SuccessfulCreate           statefulset/csi-mockplugin-resizer                               create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\ncsi-mock-volumes-5764                86s         Normal    SuccessfulCreate           statefulset/csi-mockplugin                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-5764                75s         Normal    SuccessfulAttachVolume     pod/pvc-volume-tester-bpjns                                   
   AttachVolume.Attach succeeded for volume \"pvc-3cfd9af1-461a-4c78-9ed9-b85353a3391b\"\ncsi-mock-volumes-5764                67s         Normal    Pulled                     pod/pvc-volume-tester-bpjns                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-5764                67s         Normal    Created                    pod/pvc-volume-tester-bpjns                                      Created container volume-tester\ncsi-mock-volumes-5764                64s         Normal    Started                    pod/pvc-volume-tester-bpjns                                      Started container volume-tester\ncsi-mock-volumes-5764                84s         Normal    ExternalProvisioning       persistentvolumeclaim/pvc-w767b                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-5764\" or manually created by system administrator\ncsi-mock-volumes-5764                78s         Normal    Provisioning               persistentvolumeclaim/pvc-w767b                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-5764/pvc-w767b\"\ncsi-mock-volumes-5764                78s         Normal    ProvisioningSucceeded      persistentvolumeclaim/pvc-w767b                                  Successfully provisioned volume pvc-3cfd9af1-461a-4c78-9ed9-b85353a3391b\ncsi-mock-volumes-5764                57s         Warning   ExternalExpanding          persistentvolumeclaim/pvc-w767b                                  Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-5764                56s         Normal    Resizing                   persistentvolumeclaim/pvc-w767b                                  External resizer is resizing volume pvc-3cfd9af1-461a-4c78-9ed9-b85353a3391b\ncsi-mock-volumes-5764                56s        
 Normal    FileSystemResizeRequired   persistentvolumeclaim/pvc-w767b                                  Require file system resize of volume on node\ncsi-mock-volumes-9568                97s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-9568                97s         Normal    Created                    pod/csi-mockplugin-0                                             Created container csi-provisioner\ncsi-mock-volumes-9568                96s         Normal    Started                    pod/csi-mockplugin-0                                             Started container csi-provisioner\ncsi-mock-volumes-9568                96s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-9568                96s         Normal    Created                    pod/csi-mockplugin-0                                             Created container driver-registrar\ncsi-mock-volumes-9568                96s         Normal    Started                    pod/csi-mockplugin-0                                             Started container driver-registrar\ncsi-mock-volumes-9568                96s         Normal    Pulled                     pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-9568                96s         Normal    Created                    pod/csi-mockplugin-0                                             Created container mock\ncsi-mock-volumes-9568                96s         Normal    Started                    pod/csi-mockplugin-0                                             Started container mock\ncsi-mock-volumes-9568                
97s         Normal    Pulled                     pod/csi-mockplugin-attacher-0                                    Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-9568                97s         Normal    Created                    pod/csi-mockplugin-attacher-0                                    Created container csi-attacher\ncsi-mock-volumes-9568                96s         Normal    Started                    pod/csi-mockplugin-attacher-0                                    Started container csi-attacher\ncsi-mock-volumes-9568                99s         Normal    SuccessfulCreate           statefulset/csi-mockplugin-attacher                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-9568                99s         Normal    SuccessfulCreate           statefulset/csi-mockplugin                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-9568                98s         Normal    ExternalProvisioning       persistentvolumeclaim/pvc-bnsn5                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9568\" or manually created by system administrator\ncsi-mock-volumes-9568                94s         Normal    Provisioning               persistentvolumeclaim/pvc-bnsn5                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-9568/pvc-bnsn5\"\ncsi-mock-volumes-9568                93s         Normal    ProvisioningSucceeded      persistentvolumeclaim/pvc-bnsn5                                  Successfully provisioned volume pvc-a20458b4-e2ac-4d8e-9a94-5ba724edf265\ncsi-mock-volumes-9568                84s         Normal    SuccessfulAttachVolume     pod/pvc-volume-tester-jckxv                                      AttachVolume.Attach succeeded for volume 
\"pvc-a20458b4-e2ac-4d8e-9a94-5ba724edf265\"\ncsi-mock-volumes-9568                80s         Normal    Pulled                     pod/pvc-volume-tester-jckxv                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-9568                80s         Normal    Created                    pod/pvc-volume-tester-jckxv                                      Created container volume-tester\ncsi-mock-volumes-9568                79s         Normal    Started                    pod/pvc-volume-tester-jckxv                                      Started container volume-tester\ncsi-mock-volumes-9568                76s         Normal    Killing                    pod/pvc-volume-tester-jckxv                                      Stopping container volume-tester\ndefault                              8m14s       Normal    Starting                   node/bootstrap-e2e-master                                        Starting kubelet.\ndefault                              8m14s       Normal    NodeHasSufficientMemory    node/bootstrap-e2e-master                                        Node bootstrap-e2e-master status is now: NodeHasSufficientMemory\ndefault                              8m14s       Normal    NodeHasNoDiskPressure      node/bootstrap-e2e-master                                        Node bootstrap-e2e-master status is now: NodeHasNoDiskPressure\ndefault                              8m14s       Normal    NodeHasSufficientPID       node/bootstrap-e2e-master                                        Node bootstrap-e2e-master status is now: NodeHasSufficientPID\ndefault                              8m14s       Normal    NodeNotSchedulable         node/bootstrap-e2e-master                                        Node bootstrap-e2e-master status is now: NodeNotSchedulable\ndefault                              8m13s       Normal    NodeAllocatableEnforced    node/bootstrap-e2e-master                                        
Updated Node Allocatable limit across pods\ndefault                              8m13s       Normal    NodeReady                  node/bootstrap-e2e-master                                        Node bootstrap-e2e-master status is now: NodeReady\ndefault                              8m9s        Normal    RegisteredNode             node/bootstrap-e2e-master                                        Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller\ndefault                              8m25s       Normal    Starting                   node/bootstrap-e2e-minion-group-6tqd                             Starting kubelet.\ndefault                              8m25s       Normal    NodeHasSufficientMemory    node/bootstrap-e2e-minion-group-6tqd                             Node bootstrap-e2e-minion-group-6tqd status is now: NodeHasSufficientMemory\ndefault                              8m25s       Normal    NodeHasNoDiskPressure      node/bootstrap-e2e-minion-group-6tqd                             Node bootstrap-e2e-minion-group-6tqd status is now: NodeHasNoDiskPressure\ndefault                              8m25s       Normal    NodeHasSufficientPID       node/bootstrap-e2e-minion-group-6tqd                             Node bootstrap-e2e-minion-group-6tqd status is now: NodeHasSufficientPID\ndefault                              8m25s       Normal    NodeAllocatableEnforced    node/bootstrap-e2e-minion-group-6tqd                             Updated Node Allocatable limit across pods\ndefault                              8m24s       Normal    RegisteredNode             node/bootstrap-e2e-minion-group-6tqd                             Node bootstrap-e2e-minion-group-6tqd event: Registered Node bootstrap-e2e-minion-group-6tqd in Controller\ndefault                              8m23s       Normal    Starting                   node/bootstrap-e2e-minion-group-6tqd                             Starting kube-proxy.\ndefault                              
8m20s       Warning   ContainerdStart            node/bootstrap-e2e-minion-group-6tqd                             Starting containerd container runtime...\ndefault                              8m20s       Warning   DockerStart                node/bootstrap-e2e-minion-group-6tqd                             Starting Docker Application Container Engine...\ndefault                              8m20s       Warning   KubeletStart               node/bootstrap-e2e-minion-group-6tqd                             Started Kubernetes kubelet.\ndefault                              8m15s       Normal    NodeReady                  node/bootstrap-e2e-minion-group-6tqd                             Node bootstrap-e2e-minion-group-6tqd status is now: NodeReady\ndefault                              8m27s       Normal    Starting                   node/bootstrap-e2e-minion-group-d58v                             Starting kubelet.\ndefault                              8m27s       Normal    NodeHasSufficientMemory    node/bootstrap-e2e-minion-group-d58v                             Node bootstrap-e2e-minion-group-d58v status is now: NodeHasSufficientMemory\ndefault                              8m27s       Normal    NodeHasNoDiskPressure      node/bootstrap-e2e-minion-group-d58v                             Node bootstrap-e2e-minion-group-d58v status is now: NodeHasNoDiskPressure\ndefault                              8m27s       Normal    NodeHasSufficientPID       node/bootstrap-e2e-minion-group-d58v                             Node bootstrap-e2e-minion-group-d58v status is now: NodeHasSufficientPID\ndefault                              8m27s       Normal    NodeAllocatableEnforced    node/bootstrap-e2e-minion-group-d58v                             Updated Node Allocatable limit across pods\ndefault                              8m25s       Warning   ContainerdStart            node/bootstrap-e2e-minion-group-d58v                             Starting containerd container runtime...\ndefault      
                        8m25s       Warning   DockerStart                node/bootstrap-e2e-minion-group-d58v                             Starting Docker Application Container Engine...\ndefault                              8m25s       Warning   KubeletStart               node/bootstrap-e2e-minion-group-d58v                             Started Kubernetes kubelet.\ndefault                              8m24s       Normal    Starting                   node/bootstrap-e2e-minion-group-d58v                             Starting kube-proxy.\ndefault                              8m24s       Normal    RegisteredNode             node/bootstrap-e2e-minion-group-d58v                             Node bootstrap-e2e-minion-group-d58v event: Registered Node bootstrap-e2e-minion-group-d58v in Controller\ndefault                              8m16s       Normal    NodeReady                  node/bootstrap-e2e-minion-group-d58v                             Node bootstrap-e2e-minion-group-d58v status is now: NodeReady\ndefault                              8m26s       Normal    Starting                   node/bootstrap-e2e-minion-group-w9fq                             Starting kubelet.\ndefault                              8m26s       Normal    NodeHasSufficientMemory    node/bootstrap-e2e-minion-group-w9fq                             Node bootstrap-e2e-minion-group-w9fq status is now: NodeHasSufficientMemory\ndefault                              8m26s       Normal    NodeHasNoDiskPressure      node/bootstrap-e2e-minion-group-w9fq                             Node bootstrap-e2e-minion-group-w9fq status is now: NodeHasNoDiskPressure\ndefault                              8m26s       Normal    NodeHasSufficientPID       node/bootstrap-e2e-minion-group-w9fq                             Node bootstrap-e2e-minion-group-w9fq status is now: NodeHasSufficientPID\ndefault                              8m26s       Normal    NodeAllocatableEnforced    node/bootstrap-e2e-minion-group-w9fq                 
Updated Node Allocatable limit across pods
default  8m25s  Warning  ContainerdStart  node/bootstrap-e2e-minion-group-w9fq  Starting containerd container runtime...
default  8m25s  Warning  DockerStart  node/bootstrap-e2e-minion-group-w9fq  Starting Docker Application Container Engine...
default  8m25s  Warning  KubeletStart  node/bootstrap-e2e-minion-group-w9fq  Started Kubernetes kubelet.
default  8m24s  Normal  RegisteredNode  node/bootstrap-e2e-minion-group-w9fq  Node bootstrap-e2e-minion-group-w9fq event: Registered Node bootstrap-e2e-minion-group-w9fq in Controller
default  8m24s  Normal  Starting  node/bootstrap-e2e-minion-group-w9fq  Starting kube-proxy.
default  8m16s  Normal  NodeReady  node/bootstrap-e2e-minion-group-w9fq  Node bootstrap-e2e-minion-group-w9fq status is now: NodeReady
default  8m26s  Normal  Starting  node/bootstrap-e2e-minion-group-zzr9  Starting kubelet.
default  8m25s  Normal  NodeHasSufficientMemory  node/bootstrap-e2e-minion-group-zzr9  Node bootstrap-e2e-minion-group-zzr9 status is now: NodeHasSufficientMemory
default  8m25s  Normal  NodeHasNoDiskPressure  node/bootstrap-e2e-minion-group-zzr9  Node bootstrap-e2e-minion-group-zzr9 status is now: NodeHasNoDiskPressure
default  8m25s  Normal  NodeHasSufficientPID  node/bootstrap-e2e-minion-group-zzr9  Node bootstrap-e2e-minion-group-zzr9 status is now: NodeHasSufficientPID
default  8m25s  Normal  NodeAllocatableEnforced  node/bootstrap-e2e-minion-group-zzr9  Updated Node Allocatable limit across pods
default  8m25s  Warning  ContainerdStart  node/bootstrap-e2e-minion-group-zzr9  Starting containerd container runtime...
default  8m25s  Warning  DockerStart  node/bootstrap-e2e-minion-group-zzr9  Starting Docker Application Container Engine...
default  8m25s  Warning  KubeletStart  node/bootstrap-e2e-minion-group-zzr9  Started Kubernetes kubelet.
default  8m24s  Normal  RegisteredNode  node/bootstrap-e2e-minion-group-zzr9  Node bootstrap-e2e-minion-group-zzr9 event: Registered Node bootstrap-e2e-minion-group-zzr9 in Controller
default  8m23s  Normal  Starting  node/bootstrap-e2e-minion-group-zzr9  Starting kube-proxy.
default  8m15s  Normal  NodeReady  node/bootstrap-e2e-minion-group-zzr9  Node bootstrap-e2e-minion-group-zzr9 status is now: NodeReady
default  3m8s  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Successfully assigned default/recycler-for-nfs-nff4f to bootstrap-e2e-minion-group-w9fq
default  3m8s  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Pulling image "k8s.gcr.io/busybox:1.27"
default  3m8s  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Successfully pulled image "k8s.gcr.io/busybox:1.27"
default  3m  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Created container pv-recycler
default  2m58s  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Started container pv-recycler
default  3m18s  Normal  VolumeRecycled  persistentvolume/nfs-nff4f  Volume recycled
default  3m1s  Normal  RecyclerPod  persistentvolume/nfs-nff4f  Recycler pod: Container image "k8s.gcr.io/busybox:1.27" already present on machine
default  68s  Normal  VolumeDelete  persistentvolume/pvc-641881b0-4b74-4b13-8426-c3def2bf70d2  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-641881b0-4b74-4b13-8426-c3def2bf70d2' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-w9fq', resourceInUseByAnotherResource
default  4m18s  Normal  VolumeDelete  persistentvolume/pvc-83967d6a-3d6e-4389-bbd6-70cbb2ac2120  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-83967d6a-3d6e-4389-bbd6-70cbb2ac2120' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
default  85s  Normal  VolumeDelete  persistentvolume/pvc-8ec79ab5-69d7-4a7d-a987-12cfbc80a7ce  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-8ec79ab5-69d7-4a7d-a987-12cfbc80a7ce' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-d58v', resourceInUseByAnotherResource
default  101s  Normal  VolumeDelete  persistentvolume/pvc-9ea87585-4418-4dc8-9d36-629e33ac58aa  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-9ea87585-4418-4dc8-9d36-629e33ac58aa' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
default  2m23s  Normal  VolumeDelete  persistentvolume/pvc-a1150249-863d-494a-a543-c6ad98240d7d  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-a1150249-863d-494a-a543-c6ad98240d7d' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-w9fq', resourceInUseByAnotherResource
default  2m32s  Normal  VolumeDelete  persistentvolume/pvc-a1dc20a4-e9d1-44aa-8edf-26bcaebcc737  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-a1dc20a4-e9d1-44aa-8edf-26bcaebcc737' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-w9fq', resourceInUseByAnotherResource
default  2m57s  Normal  VolumeDelete  persistentvolume/pvc-c5845b7d-835f-4eb6-96cd-707976f477fa  googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-c5845b7d-835f-4eb6-96cd-707976f477fa' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-w9fq', resourceInUseByAnotherResource
default  3m26s  Normal  Scheduled  pod/recycler-for-nfs-nff4f  Successfully assigned default/recycler-for-nfs-nff4f to bootstrap-e2e-minion-group-w9fq
default  3m22s  Normal  Pulling  pod/recycler-for-nfs-nff4f  Pulling image "k8s.gcr.io/busybox:1.27"
default  3m21s  Normal  Pulled  pod/recycler-for-nfs-nff4f  Successfully pulled image "k8s.gcr.io/busybox:1.27"
default  3m21s  Normal  Created  pod/recycler-for-nfs-nff4f  Created container pv-recycler
default  3m20s  Normal  Started  pod/recycler-for-nfs-nff4f  Started container pv-recycler
default  3m8s  Normal  Scheduled  pod/recycler-for-nfs-nff4f  Successfully assigned default/recycler-for-nfs-nff4f to bootstrap-e2e-minion-group-w9fq
default  3m1s  Normal  Pulled  pod/recycler-for-nfs-nff4f  Container image "k8s.gcr.io/busybox:1.27" already present on machine
default  3m1s  Normal  Created  pod/recycler-for-nfs-nff4f  Created container pv-recycler
default  2m58s  Normal  Started  pod/recycler-for-nfs-nff4f  Started container pv-recycler
default  45s  Warning  FailedToCreateEndpoint  endpoints/tolerate-unready  Failed to create endpoint for service services-4906/tolerate-unready: endpoints "tolerate-unready" already exists
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-4227m  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-4227m to bootstrap-e2e-minion-group-w9fq
deployment-6078  16s  Normal  Pulled  pod/webserver-deployment-595b5b9587-4227m  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  16s  Normal  Created  pod/webserver-deployment-595b5b9587-4227m  Created container httpd
deployment-6078  15s  Normal  Started  pod/webserver-deployment-595b5b9587-4227m  Started container httpd
deployment-6078  0s  Normal  Killing  pod/webserver-deployment-595b5b9587-4227m  Stopping container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-7jp6h  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-7jp6h to bootstrap-e2e-minion-group-w9fq
deployment-6078  16s  Normal  Pulled  pod/webserver-deployment-595b5b9587-7jp6h  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  16s  Normal  Created  pod/webserver-deployment-595b5b9587-7jp6h  Created container httpd
deployment-6078  15s  Normal  Started  pod/webserver-deployment-595b5b9587-7jp6h  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-9697g  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-9697g to bootstrap-e2e-minion-group-w9fq
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-9697g  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  15s  Normal  Created  pod/webserver-deployment-595b5b9587-9697g  Created container httpd
deployment-6078  14s  Normal  Started  pod/webserver-deployment-595b5b9587-9697g  Started container httpd
deployment-6078  0s  Normal  Killing  pod/webserver-deployment-595b5b9587-9697g  Stopping container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-bqfdc  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-bqfdc to bootstrap-e2e-minion-group-6tqd
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-bqfdc  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  14s  Normal  Created  pod/webserver-deployment-595b5b9587-bqfdc  Created container httpd
deployment-6078  13s  Normal  Started  pod/webserver-deployment-595b5b9587-bqfdc  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-fncww  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-fncww to bootstrap-e2e-minion-group-6tqd
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-fncww  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  15s  Normal  Created  pod/webserver-deployment-595b5b9587-fncww  Created container httpd
deployment-6078  14s  Normal  Started  pod/webserver-deployment-595b5b9587-fncww  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-gb87n  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-gb87n to bootstrap-e2e-minion-group-zzr9
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-gb87n  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  15s  Normal  Created  pod/webserver-deployment-595b5b9587-gb87n  Created container httpd
deployment-6078  14s  Normal  Started  pod/webserver-deployment-595b5b9587-gb87n  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-j5czc  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-j5czc to bootstrap-e2e-minion-group-w9fq
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-j5czc  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  15s  Normal  Created  pod/webserver-deployment-595b5b9587-j5czc  Created container httpd
deployment-6078  14s  Normal  Started  pod/webserver-deployment-595b5b9587-j5czc  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-jqdfh  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-jqdfh to bootstrap-e2e-minion-group-6tqd
deployment-6078  15s  Normal  Pulled  pod/webserver-deployment-595b5b9587-jqdfh  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  15s  Normal  Created  pod/webserver-deployment-595b5b9587-jqdfh  Created container httpd
deployment-6078  14s  Normal  Started  pod/webserver-deployment-595b5b9587-jqdfh  Started container httpd
deployment-6078  21s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-p2jgg  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-p2jgg to bootstrap-e2e-minion-group-w9fq
deployment-6078  17s  Normal  Pulled  pod/webserver-deployment-595b5b9587-p2jgg  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  17s  Normal  Created  pod/webserver-deployment-595b5b9587-p2jgg  Created container httpd
deployment-6078  16s  Normal  Started  pod/webserver-deployment-595b5b9587-p2jgg  Started container httpd
deployment-6078  20s  Normal  Scheduled  pod/webserver-deployment-595b5b9587-sqrvx  Successfully assigned deployment-6078/webserver-deployment-595b5b9587-sqrvx to bootstrap-e2e-minion-group-zzr9
deployment-6078  14s  Normal  Pulled  pod/webserver-deployment-595b5b9587-sqrvx  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-6078  14s  Normal  Created  pod/webserver-deployment-595b5b9587-sqrvx  Created container httpd
deployment-6078  11s  Normal  Started  pod/webserver-deployment-595b5b9587-sqrvx  Started container httpd
deployment-6078  21s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-p2jgg
deployment-6078  21s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-fncww
deployment-6078  21s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-j5czc
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-9697g
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-bqfdc
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-gb87n
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-7jp6h
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-4227m
deployment-6078  20s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  Created pod: webserver-deployment-595b5b9587-jqdfh
deployment-6078  19s  Normal  SuccessfulCreate  replicaset/webserver-deployment-595b5b9587  (combined from similar events): Created pod: webserver-deployment-595b5b9587-sqrvx
deployment-6078  0s  Normal  SuccessfulDelete  replicaset/webserver-deployment-595b5b9587  Deleted pod: webserver-deployment-595b5b9587-4227m
deployment-6078  0s  Normal  SuccessfulDelete  replicaset/webserver-deployment-595b5b9587  Deleted pod: webserver-deployment-595b5b9587-9697g
deployment-6078  0s  Normal  Scheduled  pod/webserver-deployment-c7997dcc8-d7p48  Successfully assigned deployment-6078/webserver-deployment-c7997dcc8-d7p48 to bootstrap-e2e-minion-group-zzr9
deployment-6078  1s  Normal  Scheduled  pod/webserver-deployment-c7997dcc8-dhtxv  Successfully assigned deployment-6078/webserver-deployment-c7997dcc8-dhtxv to bootstrap-e2e-minion-group-zzr9
deployment-6078  1s  Normal  Scheduled  pod/webserver-deployment-c7997dcc8-mj9sd  Successfully assigned deployment-6078/webserver-deployment-c7997dcc8-mj9sd to bootstrap-e2e-minion-group-zzr9
deployment-6078  1s  Normal  Scheduled  pod/webserver-deployment-c7997dcc8-rcsvj  Successfully assigned deployment-6078/webserver-deployment-c7997dcc8-rcsvj to bootstrap-e2e-minion-group-w9fq
deployment-6078  1s  Normal  SuccessfulCreate  replicaset/webserver-deployment-c7997dcc8  Created pod: webserver-deployment-c7997dcc8-mj9sd
deployment-6078  1s  Normal  SuccessfulCreate  replicaset/webserver-deployment-c7997dcc8  Created pod: webserver-deployment-c7997dcc8-rcsvj
deployment-6078  1s  Normal  SuccessfulCreate  replicaset/webserver-deployment-c7997dcc8  Created pod: webserver-deployment-c7997dcc8-dhtxv
deployment-6078  0s  Normal  SuccessfulCreate  replicaset/webserver-deployment-c7997dcc8  Created pod: webserver-deployment-c7997dcc8-d7p48
deployment-6078  0s  Normal  SuccessfulCreate  replicaset/webserver-deployment-c7997dcc8  Created pod: webserver-deployment-c7997dcc8-4spqp
deployment-6078  21s  Normal  ScalingReplicaSet  deployment/webserver-deployment  Scaled up replica set webserver-deployment-595b5b9587 to 10
deployment-6078  1s  Normal  ScalingReplicaSet  deployment/webserver-deployment  Scaled up replica set webserver-deployment-c7997dcc8 to 3
deployment-6078  1s  Normal  ScalingReplicaSet  deployment/webserver-deployment  Scaled down replica set webserver-deployment-595b5b9587 to 8
deployment-6078  0s  Normal  ScalingReplicaSet  deployment/webserver-deployment  Scaled up replica set webserver-deployment-c7997dcc8 to 5
deployment-8366  88s  Normal  Scheduled  pod/test-new-deployment-595b5b9587-6dc66  Successfully assigned deployment-8366/test-new-deployment-595b5b9587-6dc66 to bootstrap-e2e-minion-group-6tqd
deployment-8366  87s  Warning  FailedMount  pod/test-new-deployment-595b5b9587-6dc66  MountVolume.SetUp failed for volume "default-token-slvzv" : failed to sync secret cache: timed out waiting for the condition
deployment-8366  82s  Normal  Pulling  pod/test-new-deployment-595b5b9587-6dc66  Pulling image "docker.io/library/httpd:2.4.38-alpine"
deployment-8366  64s  Normal  Pulled  pod/test-new-deployment-595b5b9587-6dc66  Successfully pulled image "docker.io/library/httpd:2.4.38-alpine"
deployment-8366  64s  Normal  Created  pod/test-new-deployment-595b5b9587-6dc66  Created container httpd
deployment-8366  63s  Normal  Started  pod/test-new-deployment-595b5b9587-6dc66  Started container httpd
deployment-8366  58s  Normal  Killing  pod/test-new-deployment-595b5b9587-6dc66  Stopping container httpd
deployment-8366  88s  Normal  SuccessfulCreate  replicaset/test-new-deployment-595b5b9587  Created pod: test-new-deployment-595b5b9587-6dc66
deployment-8366  89s  Normal  ScalingReplicaSet  deployment/test-new-deployment  Scaled up replica set test-new-deployment-595b5b9587 to 1
dns-1601  90s  Normal  Scheduled  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Successfully assigned dns-1601/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a to bootstrap-e2e-minion-group-6tqd
dns-1601  88s  Normal  Pulled  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
dns-1601  88s  Normal  Created  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Created container webserver
dns-1601  88s  Normal  Started  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Started container webserver
dns-1601  88s  Normal  Pulled  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Container image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1" already present on machine
dns-1601  88s  Normal  Created  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Created container querier
dns-1601  87s  Normal  Started  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Started container querier
dns-1601  87s  Normal  Pulling  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Pulling image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-1601  62s  Normal  Pulled  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-1601  62s  Normal  Created  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Created container jessie-querier
dns-1601  62s  Normal  Started  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Started container jessie-querier
dns-1601  47s  Normal  Killing  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Stopping container webserver
dns-1601  47s  Normal  Killing  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Stopping container jessie-querier
dns-1601  47s  Normal  Killing  pod/dns-test-a150d343-1eda-42e0-9956-b9403d94c12a  Stopping container querier
dns-1601  89s  Warning  FailedToUpdateEndpoint  endpoints/dns-test-service-2  Failed to update endpoint dns-1601/dns-test-service-2: Operation cannot be fulfilled on endpoints "dns-test-service-2": the object has been modified; please apply your changes to the latest version and try again
emptydir-wrapper-6520  15s  Normal  Scheduled  pod/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe  Successfully assigned emptydir-wrapper-6520/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe to bootstrap-e2e-minion-group-zzr9
emptydir-wrapper-6520  11s  Normal  Pulled  pod/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe  Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
emptydir-wrapper-6520  11s  Normal  Created  pod/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe  Created container secret-test
emptydir-wrapper-6520  9s  Normal  Started  pod/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe  Started container secret-test
emptydir-wrapper-6520  1s  Normal  Killing  pod/pod-secrets-3ca7552a-1a6b-4ed9-85f4-f0ce8f39b2fe  Stopping container secret-test
ephemeral-1724  39s  Normal  Pulled  pod/csi-hostpath-attacher-0  Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-1724  39s  Normal  Created  pod/csi-hostpath-attacher-0  Created container csi-attacher
ephemeral-1724  36s  Normal  Started  pod/csi-hostpath-attacher-0  Started container csi-attacher
ephemeral-1724  44s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1724  43s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-1724  37s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-1724  36s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
ephemeral-1724  35s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
ephemeral-1724  44s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1724  43s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-1724  42s  Warning  FailedMount  pod/csi-hostpath-resizer-0  MountVolume.SetUp failed for volume "csi-resizer-token-pg2v6" : failed to sync secret cache: timed out waiting for the condition
ephemeral-1724  36s  Normal  Pulled  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-1724  36s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
ephemeral-1724  35s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
ephemeral-1724  44s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1724  43s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-1724  40s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-1724  40s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
ephemeral-1724  39s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
ephemeral-1724  39s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-1724  39s  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
ephemeral-1724  36s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
ephemeral-1724  36s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-1724  35s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
ephemeral-1724  34s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
ephemeral-1724  45s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-1724  42s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-z8vxv" : failed to sync secret cache: timed out waiting for the condition
ephemeral-1724  37s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-1724  37s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
ephemeral-1724  35s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
ephemeral-1724  43s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-1724  35s  Warning  FailedMount  pod/inline-volume-tester-nx984  MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-1724 not found in the list of registered CSI drivers
ephemeral-1724  26s  Normal  Pulled  pod/inline-volume-tester-nx984  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-1724  26s  Normal  Created  pod/inline-volume-tester-nx984  Created container csi-volume-tester
ephemeral-1724  24s  Normal  Started  pod/inline-volume-tester-nx984  Started container csi-volume-tester
ephemeral-1724  16s  Normal  Pulled  pod/inline-volume-tester2-khg5b  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-1724  16s  Normal  Created  pod/inline-volume-tester2-khg5b  Created container csi-volume-tester
ephemeral-1724  14s  Normal  Started  pod/inline-volume-tester2-khg5b  Started container csi-volume-tester
ephemeral-1724  8s  Normal  Killing  pod/inline-volume-tester2-khg5b  Stopping container csi-volume-tester
ephemeral-6080  15s  Normal  Pulled  pod/csi-hostpath-attacher-0  Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-6080  15s
  Normal    Created                    pod/csi-hostpath-attacher-0                                      Created container csi-attacher\nephemeral-6080                       14s         Normal    Started                    pod/csi-hostpath-attacher-0                                      Started container csi-attacher\nephemeral-6080                       22s         Warning   FailedCreate               statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-6080                       21s         Normal    SuccessfulCreate           statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-6080                       16s         Normal    Pulled                     pod/csi-hostpath-provisioner-0                                   Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nephemeral-6080                       16s         Normal    Created                    pod/csi-hostpath-provisioner-0                                   Created container csi-provisioner\nephemeral-6080                       14s         Normal    Started                    pod/csi-hostpath-provisioner-0                                   Started container csi-provisioner\nephemeral-6080                       22s         Warning   FailedCreate               statefulset/csi-hostpath-provisioner                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-6080                       21s         Normal    SuccessfulCreate           statefulset/csi-hostpath-provisioner                          
   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-6080                       17s         Normal    Pulled                     pod/csi-hostpath-resizer-0                                       Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nephemeral-6080                       17s         Normal    Created                    pod/csi-hostpath-resizer-0                                       Created container csi-resizer\nephemeral-6080                       15s         Normal    Started                    pod/csi-hostpath-resizer-0                                       Started container csi-resizer\nephemeral-6080                       22s         Warning   FailedCreate               statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-6080                       22s         Normal    SuccessfulCreate           statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-6080                       21s         Normal    Pulled                     pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nephemeral-6080                       21s         Normal    Created                    pod/csi-hostpathplugin-0                                         Created container node-driver-registrar\nephemeral-6080                       21s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container node-driver-registrar\nephemeral-6080                       21s         Normal    Pulled                     
pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nephemeral-6080                       21s         Normal    Created                    pod/csi-hostpathplugin-0                                         Created container hostpath\nephemeral-6080                       19s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container hostpath\nephemeral-6080                       19s         Normal    Pulled                     pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nephemeral-6080                       18s         Normal    Created                    pod/csi-hostpathplugin-0                                         Created container liveness-probe\nephemeral-6080                       16s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container liveness-probe\nephemeral-6080                       23s         Normal    SuccessfulCreate           statefulset/csi-hostpathplugin                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-6080                       18s         Normal    Pulled                     pod/csi-snapshotter-0                                            Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nephemeral-6080                       18s         Normal    Created                    pod/csi-snapshotter-0                                            Created container csi-snapshotter\nephemeral-6080                       16s         Normal    Started                    pod/csi-snapshotter-0                                            Started container csi-snapshotter\nephemeral-6080                       22s     
    Warning   FailedCreate               statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-6080                       22s         Normal    SuccessfulCreate           statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-6080                       19s         Warning   FailedMount                pod/inline-volume-tester-jpm9w                                   MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-6080 not found in the list of registered CSI drivers\nephemeral-6080                       13s         Normal    Pulled                     pod/inline-volume-tester-jpm9w                                   Container image \"docker.io/library/busybox:1.29\" already present on machine\nephemeral-6080                       13s         Normal    Created                    pod/inline-volume-tester-jpm9w                                   Created container csi-volume-tester\nephemeral-6080                       11s         Normal    Started                    pod/inline-volume-tester-jpm9w                                   Started container csi-volume-tester\nephemeral-726                        53s         Normal    Pulled                     pod/csi-hostpath-attacher-0                                      Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nephemeral-726                        53s         Normal    Created                    pod/csi-hostpath-attacher-0                                      Created container csi-attacher\nephemeral-726                        52s         Normal    Started                    pod/csi-hostpath-attacher-0         
                             Started container csi-attacher\nephemeral-726                        61s         Warning   FailedCreate               statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-726                        59s         Normal    SuccessfulCreate           statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-726                        55s         Normal    Pulled                     pod/csi-hostpath-provisioner-0                                   Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nephemeral-726                        55s         Normal    Created                    pod/csi-hostpath-provisioner-0                                   Created container csi-provisioner\nephemeral-726                        54s         Normal    Started                    pod/csi-hostpath-provisioner-0                                   Started container csi-provisioner\nephemeral-726                        60s         Warning   FailedCreate               statefulset/csi-hostpath-provisioner                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-726                        59s         Normal    SuccessfulCreate           statefulset/csi-hostpath-provisioner                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-726                        56s         Normal    Pulled                     pod/csi-hostpath-resizer-0                                       Container 
image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nephemeral-726                        56s         Normal    Created                    pod/csi-hostpath-resizer-0                                       Created container csi-resizer\nephemeral-726                        54s         Normal    Started                    pod/csi-hostpath-resizer-0                                       Started container csi-resizer\nephemeral-726                        60s         Warning   FailedCreate               statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-726                        60s         Normal    SuccessfulCreate           statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-726                        59s         Normal    Pulled                     pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nephemeral-726                        59s         Normal    Created                    pod/csi-hostpathplugin-0                                         Created container node-driver-registrar\nephemeral-726                        58s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container node-driver-registrar\nephemeral-726                        58s         Normal    Pulled                     pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nephemeral-726                        58s         Normal    Created                    pod/csi-hostpathplugin-0             
                            Created container hostpath\nephemeral-726                        56s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container hostpath\nephemeral-726                        56s         Normal    Pulled                     pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nephemeral-726                        56s         Normal    Created                    pod/csi-hostpathplugin-0                                         Created container liveness-probe\nephemeral-726                        54s         Normal    Started                    pod/csi-hostpathplugin-0                                         Started container liveness-probe\nephemeral-726                        62s         Normal    SuccessfulCreate           statefulset/csi-hostpathplugin                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-726                        56s         Normal    Pulled                     pod/csi-snapshotter-0                                            Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nephemeral-726                        55s         Normal    Created                    pod/csi-snapshotter-0                                            Created container csi-snapshotter\nephemeral-726                        54s         Normal    Started                    pod/csi-snapshotter-0                                            Started container csi-snapshotter\nephemeral-726                        60s         Normal    SuccessfulCreate           statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-726                        57s         Warning   FailedMount                
pod/inline-volume-tester-pszhq                                   MountVolume.SetUp failed for volume \"my-volume-1\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-726 not found in the list of registered CSI drivers\nephemeral-726                        56s         Warning   FailedMount                pod/inline-volume-tester-pszhq                                   MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-726 not found in the list of registered CSI drivers\nephemeral-726                        52s         Normal    Pulled                     pod/inline-volume-tester-pszhq                                   Container image \"docker.io/library/busybox:1.29\" already present on machine\nephemeral-726                        51s         Normal    Created                    pod/inline-volume-tester-pszhq                                   Created container csi-volume-tester\nephemeral-726                        51s         Normal    Started                    pod/inline-volume-tester-pszhq                                   Started container csi-volume-tester\nephemeral-726                        41s         Normal    Killing                    pod/inline-volume-tester-pszhq                                   Stopping container csi-volume-tester\nhostpath-3559                        42s         Normal    Scheduled                  pod/pod-host-path-test                                           Successfully assigned hostpath-3559/pod-host-path-test to bootstrap-e2e-minion-group-zzr9\nhostpath-3559                        41s         Normal    Pulled                     pod/pod-host-path-test                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-3559                        40s         Normal    Created                    
pod/pod-host-path-test                                           Created container test-container-1\nhostpath-3559                        40s         Normal    Started                    pod/pod-host-path-test                                           Started container test-container-1\nhostpath-3559                        40s         Normal    Pulled                     pod/pod-host-path-test                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-3559                        40s         Normal    Created                    pod/pod-host-path-test                                           Created container test-container-2\nhostpath-3559                        40s         Normal    Started                    pod/pod-host-path-test                                           Started container test-container-2\njob-2435                             36s         Normal    Scheduled                  pod/all-pods-removed-g9rzm                                       Successfully assigned job-2435/all-pods-removed-g9rzm to bootstrap-e2e-minion-group-w9fq\njob-2435                             34s         Normal    Pulled                     pod/all-pods-removed-g9rzm                                       Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2435                             34s         Normal    Created                    pod/all-pods-removed-g9rzm                                       Created container c\njob-2435                             34s         Normal    Started                    pod/all-pods-removed-g9rzm                                       Started container c\njob-2435                             31s         Normal    Killing                    pod/all-pods-removed-g9rzm                                       Stopping container c\njob-2435                             37s         Normal    Scheduled                  
pod/all-pods-removed-pv5sk                                       Successfully assigned job-2435/all-pods-removed-pv5sk to bootstrap-e2e-minion-group-w9fq\njob-2435                             34s         Normal    Pulled                     pod/all-pods-removed-pv5sk                                       Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2435                             34s         Normal    Created                    pod/all-pods-removed-pv5sk                                       Created container c\njob-2435                             34s         Normal    Started                    pod/all-pods-removed-pv5sk                                       Started container c\njob-2435                             31s         Normal    Killing                    pod/all-pods-removed-pv5sk                                       Stopping container c\njob-2435                             37s         Normal    SuccessfulCreate           job/all-pods-removed                                             Created pod: all-pods-removed-pv5sk\njob-2435                             37s         Normal    SuccessfulCreate           job/all-pods-removed                                             Created pod: all-pods-removed-g9rzm\nkube-system                          8m35s       Warning   FailedScheduling           pod/coredns-65567c7b57-5ngxt                                     no nodes available to schedule pods\nkube-system                          8m27s       Warning   FailedScheduling           pod/coredns-65567c7b57-5ngxt                                     0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m25s       Warning   FailedScheduling           pod/coredns-65567c7b57-5ngxt                                     0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m19s       Warning   FailedScheduling   
        pod/coredns-65567c7b57-5ngxt                                     0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m11s       Warning   FailedScheduling           pod/coredns-65567c7b57-5ngxt                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m          Normal    Scheduled                  pod/coredns-65567c7b57-5ngxt                                     Successfully assigned kube-system/coredns-65567c7b57-5ngxt to bootstrap-e2e-minion-group-6tqd\nkube-system                          7m59s       Normal    Pulling                    pod/coredns-65567c7b57-5ngxt                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          7m58s       Normal    Pulled                     pod/coredns-65567c7b57-5ngxt                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          7m57s       Normal    Created                    pod/coredns-65567c7b57-5ngxt                                     Created container coredns\nkube-system                          7m57s       Normal    Started                    pod/coredns-65567c7b57-5ngxt                                     Started container coredns\nkube-system                          8m3s        Normal    Scheduled                  pod/coredns-65567c7b57-vfjh7                                     Successfully assigned kube-system/coredns-65567c7b57-vfjh7 to bootstrap-e2e-minion-group-d58v\nkube-system                          8m2s        Normal    Pulling                    pod/coredns-65567c7b57-vfjh7                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          8m1s        Normal    Pulled                     pod/coredns-65567c7b57-vfjh7                                 
    Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          8m1s        Normal    Created                    pod/coredns-65567c7b57-vfjh7                                     Created container coredns\nkube-system                          8m          Normal    Started                    pod/coredns-65567c7b57-vfjh7                                     Started container coredns\nkube-system                          8m39s       Warning   FailedCreate               replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: no providers available to validate pod request\nkube-system                          8m38s       Warning   FailedCreate               replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          8m36s       Normal    SuccessfulCreate           replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-5ngxt\nkube-system                          8m3s        Normal    SuccessfulCreate           replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-vfjh7\nkube-system                          8m41s       Normal    ScalingReplicaSet          deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 1\nkube-system                          8m3s        Normal    ScalingReplicaSet          deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 2\nkube-system                          8m36s       Warning   FailedScheduling           pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        no nodes available to schedule pods\nkube-system                          8m15s       Warning   FailedScheduling           
pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m7s        Normal    Scheduled                  pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-h9n4q to bootstrap-e2e-minion-group-zzr9\nkube-system                          8m6s        Normal    Pulling                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Pulling image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          8m3s        Normal    Pulled                     pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Successfully pulled image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          8m3s        Normal    Created                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Created container event-exporter\nkube-system                          8m2s        Normal    Started                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Started container event-exporter\nkube-system                          8m2s        Normal    Pulling                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          8m1s        Normal    Pulled                     pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          8m1s        Normal    Created                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Created container prometheus-to-sd-exporter\nkube-system                          8m1s        Normal    Started                    pod/event-exporter-v0.3.1-747b47fcd-h9n4q                        Started container 
prometheus-to-sd-exporter\nkube-system                          8m39s       Normal    SuccessfulCreate           replicaset/event-exporter-v0.3.1-747b47fcd                       Created pod: event-exporter-v0.3.1-747b47fcd-h9n4q\nkube-system                          8m39s       Normal    ScalingReplicaSet          deployment/event-exporter-v0.3.1                                 Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1\nkube-system                          8m32s       Warning   FailedScheduling           pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          no nodes available to schedule pods\nkube-system                          8m16s       Warning   FailedScheduling           pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m8s        Normal    Scheduled                  pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-x9pzf to bootstrap-e2e-minion-group-w9fq\nkube-system                          8m6s        Normal    Pulling                    pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          Pulling image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          8m2s        Normal    Pulled                     pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          Successfully pulled image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          8m1s        Normal    Created                    pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          Created container fluentd-gcp-scaler\nkube-system                          8m1s        Normal    Started                    pod/fluentd-gcp-scaler-76d9c77b4d-x9pzf                          Started container fluentd-gcp-scaler\nkube-system                          8m32s       Normal    SuccessfulCreate         
  replicaset/fluentd-gcp-scaler-76d9c77b4d  Created pod: fluentd-gcp-scaler-76d9c77b4d-x9pzf
kube-system  8m32s  Normal   ScalingReplicaSet  deployment/fluentd-gcp-scaler  Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1
kube-system  8m25s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-4r8cn  Successfully assigned kube-system/fluentd-gcp-v3.2.0-4r8cn to bootstrap-e2e-minion-group-6tqd
kube-system  8m23s  Normal   Pulling  pod/fluentd-gcp-v3.2.0-4r8cn  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m13s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-4r8cn  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m13s  Normal   Created  pod/fluentd-gcp-v3.2.0-4r8cn  Created container fluentd-gcp
kube-system  8m13s  Normal   Started  pod/fluentd-gcp-v3.2.0-4r8cn  Started container fluentd-gcp
kube-system  8m13s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-4r8cn  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m13s  Normal   Created  pod/fluentd-gcp-v3.2.0-4r8cn  Created container prometheus-to-sd-exporter
kube-system  8m13s  Normal   Started  pod/fluentd-gcp-v3.2.0-4r8cn  Started container prometheus-to-sd-exporter
kube-system  7m25s  Normal   Killing  pod/fluentd-gcp-v3.2.0-4r8cn  Stopping container fluentd-gcp
kube-system  7m25s  Normal   Killing  pod/fluentd-gcp-v3.2.0-4r8cn  Stopping container prometheus-to-sd-exporter
kube-system  7m1s   Normal   Scheduled  pod/fluentd-gcp-v3.2.0-6cvd6  Successfully assigned kube-system/fluentd-gcp-v3.2.0-6cvd6 to bootstrap-e2e-minion-group-zzr9
kube-system  7m     Normal   Pulled  pod/fluentd-gcp-v3.2.0-6cvd6  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  7m     Normal   Created  pod/fluentd-gcp-v3.2.0-6cvd6  Created container fluentd-gcp
kube-system  7m     Normal   Started  pod/fluentd-gcp-v3.2.0-6cvd6  Started container fluentd-gcp
kube-system  7m     Normal   Pulled  pod/fluentd-gcp-v3.2.0-6cvd6  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m     Normal   Created  pod/fluentd-gcp-v3.2.0-6cvd6  Created container prometheus-to-sd-exporter
kube-system  6m58s  Normal   Started  pod/fluentd-gcp-v3.2.0-6cvd6  Started container prometheus-to-sd-exporter
kube-system  8m26s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-frtqs  Successfully assigned kube-system/fluentd-gcp-v3.2.0-frtqs to bootstrap-e2e-minion-group-w9fq
kube-system  8m24s  Normal   Pulling  pod/fluentd-gcp-v3.2.0-frtqs  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m15s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-frtqs  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m14s  Normal   Created  pod/fluentd-gcp-v3.2.0-frtqs  Created container fluentd-gcp
kube-system  8m14s  Normal   Started  pod/fluentd-gcp-v3.2.0-frtqs  Started container fluentd-gcp
kube-system  8m14s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-frtqs  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m14s  Normal   Created  pod/fluentd-gcp-v3.2.0-frtqs  Created container prometheus-to-sd-exporter
kube-system  8m14s  Normal   Started  pod/fluentd-gcp-v3.2.0-frtqs  Started container prometheus-to-sd-exporter
kube-system  7m13s  Normal   Killing  pod/fluentd-gcp-v3.2.0-frtqs  Stopping container fluentd-gcp
kube-system  7m13s  Normal   Killing  pod/fluentd-gcp-v3.2.0-frtqs  Stopping container prometheus-to-sd-exporter
kube-system  7m6s   Normal   Scheduled  pod/fluentd-gcp-v3.2.0-g8dmm  Successfully assigned kube-system/fluentd-gcp-v3.2.0-g8dmm to bootstrap-e2e-minion-group-w9fq
kube-system  7m6s   Normal   Pulled  pod/fluentd-gcp-v3.2.0-g8dmm  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  7m5s   Normal   Created  pod/fluentd-gcp-v3.2.0-g8dmm  Created container fluentd-gcp
kube-system  7m5s   Normal   Started  pod/fluentd-gcp-v3.2.0-g8dmm  Started container fluentd-gcp
kube-system  7m5s   Normal   Pulled  pod/fluentd-gcp-v3.2.0-g8dmm  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m5s   Normal   Created  pod/fluentd-gcp-v3.2.0-g8dmm  Created container prometheus-to-sd-exporter
kube-system  7m5s   Normal   Started  pod/fluentd-gcp-v3.2.0-g8dmm  Started container prometheus-to-sd-exporter
kube-system  8m26s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-kclp7  Successfully assigned kube-system/fluentd-gcp-v3.2.0-kclp7 to bootstrap-e2e-minion-group-d58v
kube-system  8m25s  Normal   Pulling  pod/fluentd-gcp-v3.2.0-kclp7  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m15s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-kclp7  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m15s  Normal   Created  pod/fluentd-gcp-v3.2.0-kclp7  Created container fluentd-gcp
kube-system  8m15s  Normal   Started  pod/fluentd-gcp-v3.2.0-kclp7  Started container fluentd-gcp
kube-system  8m15s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-kclp7  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m15s  Normal   Created  pod/fluentd-gcp-v3.2.0-kclp7  Created container prometheus-to-sd-exporter
kube-system  8m14s  Normal   Started  pod/fluentd-gcp-v3.2.0-kclp7  Started container prometheus-to-sd-exporter
kube-system  7m37s  Normal   Killing  pod/fluentd-gcp-v3.2.0-kclp7  Stopping container fluentd-gcp
kube-system  7m37s  Normal   Killing  pod/fluentd-gcp-v3.2.0-kclp7  Stopping container prometheus-to-sd-exporter
kube-system  7m27s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-nrd82  Successfully assigned kube-system/fluentd-gcp-v3.2.0-nrd82 to bootstrap-e2e-minion-group-d58v
kube-system  7m26s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-nrd82  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  7m26s  Normal   Created  pod/fluentd-gcp-v3.2.0-nrd82  Created container fluentd-gcp
kube-system  7m26s  Normal   Started  pod/fluentd-gcp-v3.2.0-nrd82  Started container fluentd-gcp
kube-system  7m26s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-nrd82  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m26s  Normal   Created  pod/fluentd-gcp-v3.2.0-nrd82  Created container prometheus-to-sd-exporter
kube-system  7m25s  Normal   Started  pod/fluentd-gcp-v3.2.0-nrd82  Started container prometheus-to-sd-exporter
kube-system  8m13s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-p6gxp  Successfully assigned kube-system/fluentd-gcp-v3.2.0-p6gxp to bootstrap-e2e-master
kube-system  8m11s  Warning  FailedMount  pod/fluentd-gcp-v3.2.0-p6gxp  MountVolume.SetUp failed for volume "fluentd-gcp-token-qhbhl" : failed to sync secret cache: timed out waiting for the condition
kube-system  8m7s   Normal   Pulling  pod/fluentd-gcp-v3.2.0-p6gxp  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  7m46s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-p6gxp  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  7m46s  Warning  Failed  pod/fluentd-gcp-v3.2.0-p6gxp  Error: cannot find volume "varlog" to mount into container "fluentd-gcp"
kube-system  7m46s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-p6gxp  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m46s  Warning  Failed  pod/fluentd-gcp-v3.2.0-p6gxp  Error: cannot find volume "fluentd-gcp-token-qhbhl" to mount into container "prometheus-to-sd-exporter"
kube-system  8m25s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-q4kb5  Successfully assigned kube-system/fluentd-gcp-v3.2.0-q4kb5 to bootstrap-e2e-minion-group-zzr9
kube-system  8m23s  Normal   Pulling  pod/fluentd-gcp-v3.2.0-q4kb5  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m13s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-q4kb5  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m13s  Normal   Created  pod/fluentd-gcp-v3.2.0-q4kb5  Created container fluentd-gcp
kube-system  8m13s  Normal   Started  pod/fluentd-gcp-v3.2.0-q4kb5  Started container fluentd-gcp
kube-system  8m13s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-q4kb5  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m13s  Normal   Created  pod/fluentd-gcp-v3.2.0-q4kb5  Created container prometheus-to-sd-exporter
kube-system  8m12s  Normal   Started  pod/fluentd-gcp-v3.2.0-q4kb5  Started container prometheus-to-sd-exporter
kube-system  7m4s   Normal   Killing  pod/fluentd-gcp-v3.2.0-q4kb5  Stopping container fluentd-gcp
kube-system  7m4s   Normal   Killing  pod/fluentd-gcp-v3.2.0-q4kb5  Stopping container prometheus-to-sd-exporter
kube-system  7m55s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-tz8kx  Successfully assigned kube-system/fluentd-gcp-v3.2.0-tz8kx to bootstrap-e2e-master
kube-system  7m53s  Normal   Pulling  pod/fluentd-gcp-v3.2.0-tz8kx  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  7m46s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-tz8kx  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  7m44s  Normal   Created  pod/fluentd-gcp-v3.2.0-tz8kx  Created container fluentd-gcp
kube-system  7m43s  Normal   Started  pod/fluentd-gcp-v3.2.0-tz8kx  Started container fluentd-gcp
kube-system  7m43s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-tz8kx  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m43s  Normal   Created  pod/fluentd-gcp-v3.2.0-tz8kx  Created container prometheus-to-sd-exporter
kube-system  7m38s  Normal   Started  pod/fluentd-gcp-v3.2.0-tz8kx  Started container prometheus-to-sd-exporter
kube-system  7m15s  Normal   Scheduled  pod/fluentd-gcp-v3.2.0-x9nbv  Successfully assigned kube-system/fluentd-gcp-v3.2.0-x9nbv to bootstrap-e2e-minion-group-6tqd
kube-system  7m15s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-x9nbv  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  7m15s  Normal   Created  pod/fluentd-gcp-v3.2.0-x9nbv  Created container fluentd-gcp
kube-system  7m14s  Normal   Started  pod/fluentd-gcp-v3.2.0-x9nbv  Started container fluentd-gcp
kube-system  7m14s  Normal   Pulled  pod/fluentd-gcp-v3.2.0-x9nbv  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m14s  Normal   Created  pod/fluentd-gcp-v3.2.0-x9nbv  Created container prometheus-to-sd-exporter
kube-system  7m14s  Normal   Started  pod/fluentd-gcp-v3.2.0-x9nbv  Started container prometheus-to-sd-exporter
kube-system  8m26s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-kclp7
kube-system  8m26s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-frtqs
kube-system  8m25s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-q4kb5
kube-system  8m25s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-4r8cn
kube-system  8m13s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-p6gxp
kube-system  7m58s  Normal   SuccessfulDelete  daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-p6gxp
kube-system  7m55s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-tz8kx
kube-system  7m37s  Normal   SuccessfulDelete  daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-kclp7
kube-system  7m27s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-nrd82
kube-system  7m25s  Normal   SuccessfulDelete  daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-4r8cn
kube-system  7m15s  Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-x9nbv
kube-system  7m13s  Normal   SuccessfulDelete  daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-frtqs
kube-system  7m6s   Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-g8dmm
kube-system  7m4s   Normal   SuccessfulDelete  daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-q4kb5
kube-system  7m1s   Normal   SuccessfulCreate  daemonset/fluentd-gcp-v3.2.0  (combined from similar events): Created pod: fluentd-gcp-v3.2.0-6cvd6
kube-system  8m19s  Normal   LeaderElection  configmap/ingress-gce-lock  bootstrap-e2e-master_6058e became leader
kube-system  8m59s  Normal   LeaderElection  endpoints/kube-controller-manager  bootstrap-e2e-master_dfd43f4c-fc6d-4c2e-97f8-d500838bd8d4 became leader
kube-system  8m59s  Normal   LeaderElection  lease/kube-controller-manager  bootstrap-e2e-master_dfd43f4c-fc6d-4c2e-97f8-d500838bd8d4 became leader
kube-system  8m28s  Warning  FailedScheduling  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  no nodes available to schedule pods
kube-system  8m25s  Warning  FailedScheduling  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
kube-system  8m17s  Warning  FailedScheduling  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m8s   Normal   Scheduled  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-7kfcx to bootstrap-e2e-minion-group-d58v
kube-system  8m6s   Normal   Pulling  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  Pulling image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  8m4s   Normal   Pulled  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  Successfully pulled image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  8m4s   Normal   Created  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  Created container autoscaler
kube-system  8m4s   Normal   Started  pod/kube-dns-autoscaler-65bc6d4889-7kfcx  Started container autoscaler
kube-system  8m33s  Warning  FailedCreate  replicaset/kube-dns-autoscaler-65bc6d4889  Error creating: pods "kube-dns-autoscaler-65bc6d4889-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
kube-system  8m28s  Normal   SuccessfulCreate  replicaset/kube-dns-autoscaler-65bc6d4889  Created pod: kube-dns-autoscaler-65bc6d4889-7kfcx
kube-system  8m39s  Normal   ScalingReplicaSet  deployment/kube-dns-autoscaler  Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1
kube-system  8m24s  Normal   Pulled  pod/kube-proxy-bootstrap-e2e-minion-group-6tqd  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f" already present on machine
kube-system  8m24s  Normal   Created  pod/kube-proxy-bootstrap-e2e-minion-group-6tqd  Created container kube-proxy
kube-system  8m23s  Normal   Started  pod/kube-proxy-bootstrap-e2e-minion-group-6tqd  Started container kube-proxy
kube-system  8m25s  Normal   Pulled  pod/kube-proxy-bootstrap-e2e-minion-group-d58v  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f" already present on machine
kube-system  8m25s  Normal   Created  pod/kube-proxy-bootstrap-e2e-minion-group-d58v  Created container kube-proxy
kube-system  8m25s  Normal   Started  pod/kube-proxy-bootstrap-e2e-minion-group-d58v  Started container kube-proxy
kube-system  8m24s  Normal   Pulled  pod/kube-proxy-bootstrap-e2e-minion-group-w9fq  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f" already present on machine
kube-system  8m24s  Normal   Created  pod/kube-proxy-bootstrap-e2e-minion-group-w9fq  Created container kube-proxy
kube-system  8m24s  Normal   Started  pod/kube-proxy-bootstrap-e2e-minion-group-w9fq  Started container kube-proxy
kube-system  8m23s  Normal   Pulled  pod/kube-proxy-bootstrap-e2e-minion-group-zzr9  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f" already present on machine
kube-system  8m23s  Normal   Created  pod/kube-proxy-bootstrap-e2e-minion-group-zzr9  Created container kube-proxy
kube-system  8m23s  Normal   Started  pod/kube-proxy-bootstrap-e2e-minion-group-zzr9  Started container kube-proxy
kube-system  9m     Normal   LeaderElection  endpoints/kube-scheduler  bootstrap-e2e-master_fda74be5-fefc-48b4-a9ac-098fd5a86abf became leader
kube-system  9m     Normal   LeaderElection  lease/kube-scheduler  bootstrap-e2e-master_fda74be5-fefc-48b4-a9ac-098fd5a86abf became leader
kube-system  8m32s  Warning  FailedScheduling  pod/kubernetes-dashboard-7778f8b456-rw5w4  no nodes available to schedule pods
kube-system  8m26s  Warning  FailedScheduling  pod/kubernetes-dashboard-7778f8b456-rw5w4  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
kube-system  8m18s  Warning  FailedScheduling  pod/kubernetes-dashboard-7778f8b456-rw5w4  0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m9s   Warning  FailedScheduling  pod/kubernetes-dashboard-7778f8b456-rw5w4  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  7m59s  Normal   Scheduled  pod/kubernetes-dashboard-7778f8b456-rw5w4  Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-rw5w4 to bootstrap-e2e-minion-group-w9fq
kube-system  7m57s  Normal   Pulling  pod/kubernetes-dashboard-7778f8b456-rw5w4  Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  7m54s  Normal   Pulled  pod/kubernetes-dashboard-7778f8b456-rw5w4  Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  7m53s  Normal   Created  pod/kubernetes-dashboard-7778f8b456-rw5w4  Created container kubernetes-dashboard
kube-system  7m52s  Normal   Started  pod/kubernetes-dashboard-7778f8b456-rw5w4  Started container kubernetes-dashboard
kube-system  8m32s  Normal   SuccessfulCreate  replicaset/kubernetes-dashboard-7778f8b456  Created pod: kubernetes-dashboard-7778f8b456-rw5w4
kube-system  8m32s  Normal   ScalingReplicaSet  deployment/kubernetes-dashboard  Scaled up replica set kubernetes-dashboard-7778f8b456 to 1
kube-system  8m35s  Warning  FailedScheduling  pod/l7-default-backend-678889f899-zxxsk  no nodes available to schedule pods
kube-system  8m16s  Warning  FailedScheduling  pod/l7-default-backend-678889f899-zxxsk  0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m12s  Warning  FailedScheduling  pod/l7-default-backend-678889f899-zxxsk  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m4s   Normal   Scheduled  pod/l7-default-backend-678889f899-zxxsk  Successfully assigned kube-system/l7-default-backend-678889f899-zxxsk to bootstrap-e2e-minion-group-d58v
kube-system  7m56s  Normal   Pulling  pod/l7-default-backend-678889f899-zxxsk  Pulling image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  7m54s  Normal   Pulled  pod/l7-default-backend-678889f899-zxxsk  Successfully pulled image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  7m54s  Normal   Created  pod/l7-default-backend-678889f899-zxxsk  Created container default-http-backend
kube-system  7m47s  Normal   Started  pod/l7-default-backend-678889f899-zxxsk  Started container default-http-backend
kube-system  8m39s  Warning  FailedCreate  replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: no providers available to validate pod request
kube-system  8m38s  Warning  FailedCreate  replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: unable to validate against any pod security policy: []
kube-system  8m35s  Normal   SuccessfulCreate  replicaset/l7-default-backend-678889f899  Created pod: l7-default-backend-678889f899-zxxsk
kube-system  8m40s  Normal   ScalingReplicaSet  deployment/l7-default-backend  Scaled up replica set l7-default-backend-678889f899 to 1
kube-system  8m29s  Normal   Created  pod/l7-lb-controller-bootstrap-e2e-master  Created container l7-lb-controller
kube-system  8m28s  Normal   Started  pod/l7-lb-controller-bootstrap-e2e-master  Started container l7-lb-controller
kube-system  8m30s  Normal   Pulled  pod/l7-lb-controller-bootstrap-e2e-master  Container image "k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1" already present on machine
kube-system  8m13s  Normal   Scheduled  pod/metadata-proxy-v0.1-8hdgh  Successfully assigned kube-system/metadata-proxy-v0.1-8hdgh to bootstrap-e2e-master
kube-system  8m11s  Warning  FailedMount  pod/metadata-proxy-v0.1-8hdgh  MountVolume.SetUp failed for volume "metadata-proxy-token-jbctk" : failed to sync secret cache: timed out waiting for the condition
kube-system  8m9s   Normal   Pulling  pod/metadata-proxy-v0.1-8hdgh  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m8s   Normal   Pulled  pod/metadata-proxy-v0.1-8hdgh  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m7s   Normal   Created  pod/metadata-proxy-v0.1-8hdgh  Created container metadata-proxy
kube-system  8m5s   Normal   Started  pod/metadata-proxy-v0.1-8hdgh  Started container metadata-proxy
kube-system  8m5s   Normal   Pulling  pod/metadata-proxy-v0.1-8hdgh  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m3s   Normal   Pulled  pod/metadata-proxy-v0.1-8hdgh  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m     Normal   Created  pod/metadata-proxy-v0.1-8hdgh  Created container prometheus-to-sd-exporter
kube-system  7m58s  Normal   Started  pod/metadata-proxy-v0.1-8hdgh  Started container prometheus-to-sd-exporter
kube-system  8m25s  Normal   Scheduled  pod/metadata-proxy-v0.1-jf7n6  Successfully assigned kube-system/metadata-proxy-v0.1-jf7n6 to bootstrap-e2e-minion-group-w9fq
kube-system  8m23s  Normal   Pulling  pod/metadata-proxy-v0.1-jf7n6  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m22s  Normal   Pulled  pod/metadata-proxy-v0.1-jf7n6  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m20s  Normal   Created  pod/metadata-proxy-v0.1-jf7n6  Created container metadata-proxy
kube-system  8m19s  Normal   Started  pod/metadata-proxy-v0.1-jf7n6  Started container metadata-proxy
kube-system  8m19s  Normal   Pulling  pod/metadata-proxy-v0.1-jf7n6  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m18s  Normal   Pulled  pod/metadata-proxy-v0.1-jf7n6  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m16s  Normal   Created  pod/metadata-proxy-v0.1-jf7n6  Created container prometheus-to-sd-exporter
kube-system  8m15s  Normal   Started  pod/metadata-proxy-v0.1-jf7n6  Started container prometheus-to-sd-exporter
kube-system  8m26s  Normal   Scheduled  pod/metadata-proxy-v0.1-tjqp5  Successfully assigned kube-system/metadata-proxy-v0.1-tjqp5 to bootstrap-e2e-minion-group-d58v
kube-system  8m24s  Normal   Pulling  pod/metadata-proxy-v0.1-tjqp5  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m22s  Normal   Pulled  pod/metadata-proxy-v0.1-tjqp5  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m20s  Normal   Created  pod/metadata-proxy-v0.1-tjqp5  Created container metadata-proxy
kube-system  8m19s  Normal   Started  pod/metadata-proxy-v0.1-tjqp5  Started container metadata-proxy
kube-system  8m19s  Normal   Pulling  pod/metadata-proxy-v0.1-tjqp5  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m17s  Normal   Pulled  pod/metadata-proxy-v0.1-tjqp5  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  8m15s  Normal   Created  pod/metadata-proxy-v0.1-tjqp5  Created container prometheus-to-sd-exporter
kube-system  8m14s  Normal   Started  pod/metadata-proxy-v0.1-tjqp5  Started container prometheus-to-sd-exporter
kube-system  8m25s  Normal   Scheduled  pod/metadata-proxy-v0.1-x8f9w  Successfully assigned kube-system/metadata-proxy-v0.1-x8f9w to bootstrap-e2e-minion-group-zzr9
kube-system  8m22s  Normal   Pulling  pod/metadata-proxy-v0.1-x8f9w  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m21s  Normal   Pulled  pod/metadata-proxy-v0.1-x8f9w  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  8m19s  Normal   Created  pod/metadata-proxy-v0.1-x8f9w  Created container metadata-proxy
kube-system  8m18s  Normal   Started  pod/metadata-proxy-v0.1-x8f9w  Started container metadata-proxy
kube-system  8m18s  Normal   Pulling  pod/metadata-proxy-v0.1-x8f9w  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system
       8m16s       Normal    Pulled                     pod/metadata-proxy-v0.1-x8f9w                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          8m14s       Normal    Created                    pod/metadata-proxy-v0.1-x8f9w                                    Created container prometheus-to-sd-exporter\nkube-system                          8m13s       Normal    Started                    pod/metadata-proxy-v0.1-x8f9w                                    Started container prometheus-to-sd-exporter\nkube-system                          8m25s       Normal    Scheduled                  pod/metadata-proxy-v0.1-xhfwp                                    Successfully assigned kube-system/metadata-proxy-v0.1-xhfwp to bootstrap-e2e-minion-group-6tqd\nkube-system                          8m23s       Normal    Pulling                    pod/metadata-proxy-v0.1-xhfwp                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          8m21s       Normal    Pulled                     pod/metadata-proxy-v0.1-xhfwp                                    Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          8m19s       Normal    Created                    pod/metadata-proxy-v0.1-xhfwp                                    Created container metadata-proxy\nkube-system                          8m17s       Normal    Started                    pod/metadata-proxy-v0.1-xhfwp                                    Started container metadata-proxy\nkube-system                          8m17s       Normal    Pulling                    pod/metadata-proxy-v0.1-xhfwp                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          8m16s       Normal    Pulled                     pod/metadata-proxy-v0.1-xhfwp                                    Successfully pulled image 
\"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          8m13s       Normal    Created                    pod/metadata-proxy-v0.1-xhfwp                                    Created container prometheus-to-sd-exporter\nkube-system                          8m12s       Normal    Started                    pod/metadata-proxy-v0.1-xhfwp                                    Started container prometheus-to-sd-exporter\nkube-system                          8m26s       Normal    SuccessfulCreate           daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-tjqp5\nkube-system                          8m26s       Normal    SuccessfulCreate           daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-jf7n6\nkube-system                          8m25s       Normal    SuccessfulCreate           daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-xhfwp\nkube-system                          8m25s       Normal    SuccessfulCreate           daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-x8f9w\nkube-system                          8m13s       Normal    SuccessfulCreate           daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-8hdgh\nkube-system                          7m52s       Normal    Scheduled                  pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-7rfvc to bootstrap-e2e-minion-group-w9fq\nkube-system                          7m51s       Normal    Pulling                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          7m49s       Normal    Pulled                     pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                
       Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          7m49s       Normal    Created                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Created container metrics-server\nkube-system                          7m48s       Normal    Started                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Started container metrics-server\nkube-system                          7m48s       Normal    Pulling                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          7m47s       Normal    Pulled                     pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          7m47s       Normal    Created                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Created container metrics-server-nanny\nkube-system                          7m46s       Normal    Started                    pod/metrics-server-v0.3.6-5f859c87d6-7rfvc                       Started container metrics-server-nanny\nkube-system                          7m52s       Normal    SuccessfulCreate           replicaset/metrics-server-v0.3.6-5f859c87d6                      Created pod: metrics-server-v0.3.6-5f859c87d6-7rfvc\nkube-system                          8m35s       Warning   FailedScheduling           pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        no nodes available to schedule pods\nkube-system                          8m26s       Warning   FailedScheduling           pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m18s       Warning   FailedScheduling           pod/metrics-server-v0.3.6-65d4dc878-4p6mr          
              0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m9s        Warning   FailedScheduling           pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          7m59s       Normal    Scheduled                  pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-4p6mr to bootstrap-e2e-minion-group-d58v\nkube-system                          7m58s       Normal    Pulling                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          7m57s       Normal    Pulled                     pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          7m56s       Normal    Created                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Created container metrics-server\nkube-system                          7m56s       Normal    Started                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Started container metrics-server\nkube-system                          7m56s       Normal    Pulling                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          7m53s       Normal    Pulled                     pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          7m53s       Normal    Created                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Created container 
metrics-server-nanny\nkube-system                          7m53s       Normal    Started                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Started container metrics-server-nanny\nkube-system                          7m44s       Normal    Killing                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Stopping container metrics-server\nkube-system                          7m44s       Normal    Killing                    pod/metrics-server-v0.3.6-65d4dc878-4p6mr                        Stopping container metrics-server-nanny\nkube-system                          8m36s       Warning   FailedCreate               replicaset/metrics-server-v0.3.6-65d4dc878                       Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          8m35s       Normal    SuccessfulCreate           replicaset/metrics-server-v0.3.6-65d4dc878                       Created pod: metrics-server-v0.3.6-65d4dc878-4p6mr\nkube-system                          7m44s       Normal    SuccessfulDelete           replicaset/metrics-server-v0.3.6-65d4dc878                       Deleted pod: metrics-server-v0.3.6-65d4dc878-4p6mr\nkube-system                          8m37s       Normal    ScalingReplicaSet          deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                          7m52s       Normal    ScalingReplicaSet          deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                          7m45s       Normal    ScalingReplicaSet          deployment/metrics-server-v0.3.6                                 Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                          8m18s       Warning   FailedScheduling           
pod/volume-snapshot-controller-0                                 0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m13s       Warning   FailedScheduling           pod/volume-snapshot-controller-0                                 0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m5s        Normal    Scheduled                  pod/volume-snapshot-controller-0                                 Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6tqd\nkube-system                          8m4s        Normal    Pulling                    pod/volume-snapshot-controller-0                                 Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          8m1s        Normal    Pulled                     pod/volume-snapshot-controller-0                                 Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          8m1s        Normal    Created                    pod/volume-snapshot-controller-0                                 Created container volume-snapshot-controller\nkube-system                          8m1s        Normal    Started                    pod/volume-snapshot-controller-0                                 Started container volume-snapshot-controller\nkube-system                          8m26s       Normal    SuccessfulCreate           statefulset/volume-snapshot-controller                           create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-522                          60s         Normal    Created                    pod/e2e-test-httpd-deployment-594dddd44f-wm77v                   Created container e2e-test-httpd-deployment\nkubectl-522                          60s         Normal    Started     
               pod/e2e-test-httpd-deployment-594dddd44f-wm77v                   Started container e2e-test-httpd-deployment\nkubectl-522                          62s         Normal    SuccessfulCreate           replicaset/e2e-test-httpd-deployment-594dddd44f                  Created pod: e2e-test-httpd-deployment-594dddd44f-wm77v\nkubectl-522                          62s         Normal    ScalingReplicaSet          deployment/e2e-test-httpd-deployment                             Scaled up replica set e2e-test-httpd-deployment-594dddd44f to 1\nkubectl-7411                         <unknown>                                                                                                         some data here\nkubectl-7411                         2s          Warning   FailedScheduling           pod/pod1dzwngmfhc7                                               0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.\nkubectl-7411                         1s          Warning   FailedScheduling           pod/pod1dzwngmfhc7                                               skip schedule deleting pod: kubectl-7411/pod1dzwngmfhc7\nkubelet-test-7287                    29s         Normal    Scheduled                  pod/bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366                Successfully assigned kubelet-test-7287/bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366 to bootstrap-e2e-minion-group-zzr9\nkubelet-test-7287                    26s         Normal    Pulled                     pod/bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366                Container image \"docker.io/library/busybox:1.29\" already present on machine\nkubelet-test-7287                    26s         Normal    Created                    pod/bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366                Created container bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366\nkubelet-test-7287                    25s         Normal    Started                    
pod/bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366                Started container bin-falsee39eff4e-1cc1-4b24-93b1-8081a6c1c366\nnettest-1552                         4m55s       Normal    Scheduled                  pod/netserver-0                                                  Successfully assigned nettest-1552/netserver-0 to bootstrap-e2e-minion-group-6tqd\nnettest-1552                         4m53s       Normal    Pulled                     pod/netserver-0                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1552                         4m53s       Normal    Created                    pod/netserver-0                                                  Created container webserver\nnettest-1552                         4m53s       Normal    Started                    pod/netserver-0                                                  Started container webserver\nnettest-1552                         3m50s       Normal    Killing                    pod/netserver-0                                                  Stopping container webserver\nnettest-1552                         4m55s       Normal    Scheduled                  pod/netserver-1                                                  Successfully assigned nettest-1552/netserver-1 to bootstrap-e2e-minion-group-d58v\nnettest-1552                         4m48s       Normal    Pulled                     pod/netserver-1                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1552                         4m47s       Normal    Created                    pod/netserver-1                                                  Created container webserver\nnettest-1552                         4m46s       Normal    Started                    pod/netserver-1                                                  Started container 
webserver\nnettest-1552                         4m54s       Normal    Scheduled                  pod/netserver-2                                                  Successfully assigned nettest-1552/netserver-2 to bootstrap-e2e-minion-group-w9fq\nnettest-1552                         4m51s       Normal    Pulled                     pod/netserver-2                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1552                         4m50s       Normal    Created                    pod/netserver-2                                                  Created container webserver\nnettest-1552                         4m49s       Normal    Started                    pod/netserver-2                                                  Started container webserver\nnettest-1552                         4m54s       Normal    Scheduled                  pod/netserver-3                                                  Successfully assigned nettest-1552/netserver-3 to bootstrap-e2e-minion-group-zzr9\nnettest-1552                         4m52s       Normal    Pulled                     pod/netserver-3                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1552                         4m52s       Normal    Created                    pod/netserver-3                                                  Created container webserver\nnettest-1552                         4m52s       Normal    Started                    pod/netserver-3                                                  Started container webserver\nnettest-1552                         4m31s       Normal    Scheduled                  pod/test-container-pod                                           Successfully assigned nettest-1552/test-container-pod to bootstrap-e2e-minion-group-zzr9\nnettest-1552                         4m28s       Normal    Pulled     
                pod/test-container-pod                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1552                         4m28s       Normal    Created                    pod/test-container-pod                                           Created container webserver\nnettest-1552                         4m28s       Normal    Started                    pod/test-container-pod                                           Started container webserver\nnettest-2146                         5m11s       Normal    Scheduled                  pod/netserver-0                                                  Successfully assigned nettest-2146/netserver-0 to bootstrap-e2e-minion-group-6tqd\nnettest-2146                         5m9s        Normal    Pulling                    pod/netserver-0                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m7s        Normal    Pulled                     pod/netserver-0                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m7s        Normal    Created                    pod/netserver-0                                                  Created container webserver\nnettest-2146                         5m6s        Normal    Started                    pod/netserver-0                                                  Started container webserver\nnettest-2146                         3m14s       Normal    Killing                    pod/netserver-0                                                  Stopping container webserver\nnettest-2146                         5m11s       Normal    Scheduled                  pod/netserver-1                                                  Successfully assigned nettest-2146/netserver-1 to 
bootstrap-e2e-minion-group-d58v\nnettest-2146                         5m8s        Normal    Pulling                    pod/netserver-1                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m7s        Normal    Pulled                     pod/netserver-1                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m5s        Normal    Created                    pod/netserver-1                                                  Created container webserver\nnettest-2146                         5m4s        Normal    Started                    pod/netserver-1                                                  Started container webserver\nnettest-2146                         5m11s       Normal    Scheduled                  pod/netserver-2                                                  Successfully assigned nettest-2146/netserver-2 to bootstrap-e2e-minion-group-w9fq\nnettest-2146                         5m8s        Normal    Pulling                    pod/netserver-2                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m4s        Normal    Pulled                     pod/netserver-2                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2146                         5m4s        Normal    Created                    pod/netserver-2                                                  Created container webserver\nnettest-2146                         5m3s        Normal    Started                    pod/netserver-2                                                  Started container webserver\nnettest-2146                         5m11s       Normal    Scheduled                  pod/netserver-3                        
                          Successfully assigned nettest-2146/netserver-3 to bootstrap-e2e-minion-group-zzr9\nnettest-2146                         5m7s        Normal    Pulled                     pod/netserver-3                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-2146                         5m7s        Normal    Created                    pod/netserver-3                                                  Created container webserver\nnettest-2146                         5m5s        Normal    Started                    pod/netserver-3                                                  Started container webserver\nnettest-2146                         4m45s       Normal    Scheduled                  pod/test-container-pod                                           Successfully assigned nettest-2146/test-container-pod to bootstrap-e2e-minion-group-6tqd\nnettest-2146                         4m39s       Normal    Pulled                     pod/test-container-pod                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-2146                         4m39s       Normal    Created                    pod/test-container-pod                                           Created container webserver\nnettest-2146                         4m38s       Normal    Started                    pod/test-container-pod                                           Started container webserver\npersistent-local-volumes-test-2314   9s          Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-6tqd-qgwk2               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-2314   9s          Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-qgwk2               Created container 
agnhost\npersistent-local-volumes-test-2314   8s          Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-qgwk2               Started container agnhost\npersistent-local-volumes-test-5039   110s        Warning   FailedMount                pod/hostexec-bootstrap-e2e-minion-group-6tqd-vs4dp               MountVolume.SetUp failed for volume \"default-token-zhlsr\" : failed to sync secret cache: timed out waiting for the condition\npersistent-local-volumes-test-5039   109s        Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-6tqd-vs4dp               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-5039   108s        Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-vs4dp               Created container agnhost\npersistent-local-volumes-test-5039   108s        Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-vs4dp               Started container agnhost\npersistent-local-volumes-test-5039   69s         Normal    Scheduled                  pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        Successfully assigned persistent-local-volumes-test-5039/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16 to bootstrap-e2e-minion-group-6tqd\npersistent-local-volumes-test-5039   68s         Normal    SuccessfulMountVolume      pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        MapVolume.MapPodDevice succeeded for volume \"local-pvwxpdt\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvwxpdt\"\npersistent-local-volumes-test-5039   68s         Normal    SuccessfulMountVolume      pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        MapVolume.MapPodDevice succeeded for volume \"local-pvwxpdt\" volumeMapPath 
\"/var/lib/kubelet/pods/4f8ec572-5c4e-45ea-8d7a-c5c98b0c2827/volumeDevices/kubernetes.io~local-volume\"\npersistent-local-volumes-test-5039   66s         Normal    Pulled                     pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-5039   65s         Normal    Created                    pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        Created container write-pod\npersistent-local-volumes-test-5039   64s         Normal    Started                    pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        Started container write-pod\npersistent-local-volumes-test-5039   60s         Normal    Killing                    pod/security-context-9a86c26c-bc80-4e5b-a7ef-700795995e16        Stopping container write-pod\npersistent-local-volumes-test-5039   87s         Normal    Scheduled                  pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618        Successfully assigned persistent-local-volumes-test-5039/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618 to bootstrap-e2e-minion-group-6tqd\npersistent-local-volumes-test-5039   86s         Normal    SuccessfulMountVolume      pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618        MapVolume.MapPodDevice succeeded for volume \"local-pvwxpdt\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvwxpdt\"\npersistent-local-volumes-test-5039   86s         Normal    SuccessfulMountVolume      pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618        MapVolume.MapPodDevice succeeded for volume \"local-pvwxpdt\" volumeMapPath \"/var/lib/kubelet/pods/68ac637b-32bf-4c46-9e37-f3dbecbd808b/volumeDevices/kubernetes.io~local-volume\"\npersistent-local-volumes-test-5039   82s         Normal    Pulled                     pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618        Container image 
"docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-5039   82s    Normal    Created     pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618   Created container write-pod
persistent-local-volumes-test-5039   81s    Normal    Started     pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618   Started container write-pod
persistent-local-volumes-test-5039   69s    Normal    Killing     pod/security-context-f2e5da0e-c6a8-4468-a9fa-204175b3a618   Stopping container write-pod
persistent-local-volumes-test-8906   20s    Normal    Pulled      pod/hostexec-bootstrap-e2e-minion-group-6tqd-4x72w   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-8906   20s    Normal    Created     pod/hostexec-bootstrap-e2e-minion-group-6tqd-4x72w   Created container agnhost
persistent-local-volumes-test-8906   17s    Normal    Started     pod/hostexec-bootstrap-e2e-minion-group-6tqd-4x72w   Started container agnhost
persistent-local-volumes-test-8906   9s     Normal    Scheduled   pod/security-context-7e20886c-66a5-4655-8255-5fcd164360f6   Successfully assigned persistent-local-volumes-test-8906/security-context-7e20886c-66a5-4655-8255-5fcd164360f6 to bootstrap-e2e-minion-group-6tqd
persistent-local-volumes-test-8906   6s     Normal    Pulled      pod/security-context-7e20886c-66a5-4655-8255-5fcd164360f6   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-8906   6s     Normal    Created     pod/security-context-7e20886c-66a5-4655-8255-5fcd164360f6   Created container write-pod
persistent-local-volumes-test-8906   5s     Normal    Started     pod/security-context-7e20886c-66a5-4655-8255-5fcd164360f6   Started container write-pod
pods-5144   57s   Normal    Scheduled     pod/pod-ready   Successfully assigned pods-5144/pod-ready to bootstrap-e2e-minion-group-6tqd
pods-5144   56s   Warning   FailedMount   pod/pod-ready   MountVolume.SetUp failed for volume "default-token-nf67l" : failed to sync secret cache: timed out waiting for the condition
pods-5144   53s   Normal    Pulled        pod/pod-ready   Container image "docker.io/library/busybox:1.29" already present on machine
pods-5144   53s   Normal    Created       pod/pod-ready   Created container pod-readiness-gate
pods-5144   52s   Normal    Started       pod/pod-ready   Started container pod-readiness-gate
podsecuritypolicy-4050   66s   Normal   Scheduled   pod/allowed   Successfully assigned podsecuritypolicy-4050/allowed to bootstrap-e2e-minion-group-w9fq
podsecuritypolicy-4050   64s   Normal   Pulled      pod/allowed   Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-4050   64s   Normal   Created     pod/allowed   Created container pause
podsecuritypolicy-4050   64s   Normal   Started     pod/allowed   Started container pause
podsecuritypolicy-607    50s   Normal   Scheduled   pod/apparmor   Successfully assigned podsecuritypolicy-607/apparmor to bootstrap-e2e-minion-group-zzr9
podsecuritypolicy-607    47s   Normal   Pulled      pod/apparmor   Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    47s   Normal   Created     pod/apparmor   Created container pause
podsecuritypolicy-607    46s   Normal   Started     pod/apparmor   Started container pause
podsecuritypolicy-607    59s   Normal   Scheduled   pod/hostipc    Successfully assigned podsecuritypolicy-607/hostipc to bootstrap-e2e-minion-group-w9fq
podsecuritypolicy-607    57s   Normal   Pulled      pod/hostipc    Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    56s   Normal   Created     pod/hostipc    Created container pause
podsecuritypolicy-607    56s   Normal   Started     pod/hostipc    Started container pause
podsecuritypolicy-607    71s   Normal   Scheduled   pod/hostnet    Successfully assigned podsecuritypolicy-607/hostnet to bootstrap-e2e-minion-group-6tqd
podsecuritypolicy-607    69s   Normal   Pulled      pod/hostnet    Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    69s   Normal   Created     pod/hostnet    Created container pause
podsecuritypolicy-607    68s   Normal   Started     pod/hostnet    Started container pause
podsecuritypolicy-607    85s   Normal   Scheduled   pod/hostpath   Successfully assigned podsecuritypolicy-607/hostpath to bootstrap-e2e-minion-group-6tqd
podsecuritypolicy-607    83s   Warning  FailedMount pod/hostpath   MountVolume.SetUp failed for volume "default-token-ls44q" : failed to sync secret cache: timed out waiting for the condition
podsecuritypolicy-607    80s   Normal   Pulled      pod/hostpath   Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    80s   Normal   Created     pod/hostpath   Created container pause
podsecuritypolicy-607    79s   Normal   Started     pod/hostpath   Started container pause
podsecuritypolicy-607    65s   Normal   Scheduled   pod/hostpid    Successfully assigned podsecuritypolicy-607/hostpid to bootstrap-e2e-minion-group-w9fq
podsecuritypolicy-607    64s   Normal   Pulled      pod/hostpid    Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    64s   Normal   Created     pod/hostpid    Created container pause
podsecuritypolicy-607    63s   Normal   Started     pod/hostpid    Started container pause
podsecuritypolicy-607    97s   Normal   Scheduled   pod/privileged Successfully assigned podsecuritypolicy-607/privileged to bootstrap-e2e-minion-group-w9fq
podsecuritypolicy-607    94s   Normal   Pulled      pod/privileged Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    94s   Normal   Created     pod/privileged Created container pause
podsecuritypolicy-607    94s   Normal   Started     pod/privileged Started container pause
podsecuritypolicy-607    16s   Normal   Scheduled   pod/runasgroup Successfully assigned podsecuritypolicy-607/runasgroup to bootstrap-e2e-minion-group-zzr9
podsecuritypolicy-607    11s   Normal   Pulled      pod/runasgroup Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    11s   Normal   Created     pod/runasgroup Created container pause
podsecuritypolicy-607    9s    Normal   Started     pod/runasgroup Started container pause
podsecuritypolicy-607    39s   Normal   Scheduled   pod/seccomp    Successfully assigned podsecuritypolicy-607/seccomp to bootstrap-e2e-minion-group-zzr9
podsecuritypolicy-607    38s   Normal   Pulled      pod/seccomp    Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    38s   Normal   Created     pod/seccomp    Created container pause
podsecuritypolicy-607    37s   Normal   Started     pod/seccomp    Started container pause
podsecuritypolicy-607    31s   Normal   Scheduled   pod/sysadmin   Successfully assigned podsecuritypolicy-607/sysadmin to bootstrap-e2e-minion-group-zzr9
podsecuritypolicy-607    27s   Normal   Pulled      pod/sysadmin   Container image "k8s.gcr.io/pause:3.1" already present on machine
podsecuritypolicy-607    27s   Normal   Created     pod/sysadmin   Created container pause
podsecuritypolicy-607    25s   Normal   Started     pod/sysadmin   Started container pause
projected-1000   11s   Normal   Scheduled   pod/downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f   Successfully assigned projected-1000/downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f to bootstrap-e2e-minion-group-zzr9
projected-1000   8s    Normal   Pulled      pod/downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-1000   8s    Normal   Created     pod/downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f   Created container client-container
projected-1000   7s    Normal   Started     pod/downwardapi-volume-4574ef3f-531f-4259-b321-0d6ec6db641f   Started container client-container
provisioning-1455   100s   Warning   FailedMount    pod/csi-hostpath-attacher-0   MountVolume.SetUp failed for volume "csi-attacher-token-kswr6" : failed to sync secret cache: timed out waiting for the condition
provisioning-1455   97s    Normal    Pulled         pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
provisioning-1455   97s    Normal    Created        pod/csi-hostpath-attacher-0   Created container csi-attacher
provisioning-1455   96s    Normal    Started        pod/csi-hostpath-attacher-0   Started container csi-attacher
provisioning-1455   38s    Normal    Killing        pod/csi-hostpath-attacher-0   Stopping container csi-attacher
provisioning-1455   104s   Warning   FailedCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1455   101s   Normal    SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-1455   99s    Warning   FailedMount    pod/csi-hostpath-provisioner-0   MountVolume.SetUp failed for volume "csi-provisioner-token-t4nnb" : failed to sync secret cache: timed out waiting for the condition
provisioning-1455   97s    Normal    Pulled         pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
provisioning-1455   97s    Normal    Created        pod/csi-hostpath-provisioner-0   Created container csi-provisioner
provisioning-1455   96s    Normal    Started        pod/csi-hostpath-provisioner-0   Started container csi-provisioner
provisioning-1455   36s    Normal    Killing        pod/csi-hostpath-provisioner-0   Stopping container csi-provisioner
provisioning-1455   102s   Warning   FailedCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1455   101s   Normal    SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-1455   100s   Warning   FailedMount    pod/csi-hostpath-resizer-0   MountVolume.SetUp failed for volume "csi-resizer-token-6gvj9" : failed to sync secret cache: timed out waiting for the condition
provisioning-1455   98s    Normal    Pulling        pod/csi-hostpath-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-1455   94s    Normal    Pulled         pod/csi-hostpath-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-1455   93s    Normal    Created        pod/csi-hostpath-resizer-0   Created container csi-resizer
provisioning-1455   93s    Normal    Started        pod/csi-hostpath-resizer-0   Started container csi-resizer
provisioning-1455   33s    Normal    Killing        pod/csi-hostpath-resizer-0   Stopping container csi-resizer
provisioning-1455   102s   Warning   FailedCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1455   101s   Normal    SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
provisioning-1455   101s   Normal    ExternalProvisioning    persistentvolumeclaim/csi-hostpath5zzcn   waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-1455" or manually created by system administrator
provisioning-1455   96s    Normal    Provisioning            persistentvolumeclaim/csi-hostpath5zzcn   External provisioner is provisioning volume for claim "provisioning-1455/csi-hostpath5zzcn"
provisioning-1455   95s    Normal    ProvisioningSucceeded   persistentvolumeclaim/csi-hostpath5zzcn   Successfully provisioned volume pvc-89e2d0b3-aaf1-4d0e-b47b-689433039768
provisioning-1455   105s   Warning   FailedMount    pod/csi-hostpathplugin-0   MountVolume.SetUp failed for volume "default-token-vkdrw" : failed to sync secret cache: timed out waiting for the condition
provisioning-1455   104s   Normal    Pulled         pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
provisioning-1455   104s   Normal    Created        pod/csi-hostpathplugin-0   Created container node-driver-registrar
provisioning-1455   104s   Normal    Started        pod/csi-hostpathplugin-0   Started container node-driver-registrar
provisioning-1455   104s   Normal    Pulling        pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-1455   100s   Normal    Pulled         pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-1455   99s    Normal    Created        pod/csi-hostpathplugin-0   Created container hostpath
provisioning-1455   99s    Normal    Started        pod/csi-hostpathplugin-0   Started container hostpath
provisioning-1455   99s    Normal    Pulling        pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-1455   96s    Normal    Pulled         pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-1455   95s    Normal    Created        pod/csi-hostpathplugin-0   Created container liveness-probe
provisioning-1455   94s    Normal    Started        pod/csi-hostpathplugin-0   Started container liveness-probe
provisioning-1455   37s    Normal    Killing        pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
provisioning-1455   37s    Normal    Killing        pod/csi-hostpathplugin-0   Stopping container liveness-probe
provisioning-1455   37s    Normal    Killing        pod/csi-hostpathplugin-0   Stopping container hostpath
provisioning-1455   36s    Warning   FailedPreStopHook   pod/csi-hostpathplugin-0   Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_provisioning-1455(efcfd97f-ff80-44b1-8a8c-e560dc299a0c)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
provisioning-1455   36s    Warning   Unhealthy      pod/csi-hostpathplugin-0   Liveness probe failed: Get http://10.64.2.65:9898/healthz: dial tcp 10.64.2.65:9898: connect: connection refused
provisioning-1455   106s   Normal    SuccessfulCreate   statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-1455   100s   Normal    Pulling        pod/csi-snapshotter-0   Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-1455   95s    Normal    Pulled         pod/csi-snapshotter-0   Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-1455   32s    Normal    Created        pod/csi-snapshotter-0   Created container csi-snapshotter
provisioning-1455   30s    Normal    Started        pod/csi-snapshotter-0   Started container csi-snapshotter
provisioning-1455   33s    Normal    Pulled         pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
provisioning-1455   31s    Warning   FailedMount    pod/csi-snapshotter-0   MountVolume.SetUp failed for volume "csi-snapshotter-token-gks5w" : secret "csi-snapshotter-token-gks5w" not found
provisioning-1455   102s   Warning   FailedCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1455   102s   Normal    SuccessfulCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
provisioning-1455   90s    Normal    SuccessfulAttachVolume   pod/pod-subpath-test-dynamicpv-jxhk   AttachVolume.Attach succeeded for volume "pvc-89e2d0b3-aaf1-4d0e-b47b-689433039768"
provisioning-1455   80s    Normal    Pulled         pod/pod-subpath-test-dynamicpv-jxhk   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1455   80s    Normal    Created        pod/pod-subpath-test-dynamicpv-jxhk   Created container test-container-subpath-dynamicpv-jxhk
provisioning-1455   80s    Normal    Started        pod/pod-subpath-test-dynamicpv-jxhk   Started container test-container-subpath-dynamicpv-jxhk
provisioning-1455   80s    Normal    Pulled         pod/pod-subpath-test-dynamicpv-jxhk   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-1455   80s    Normal    Created        pod/pod-subpath-test-dynamicpv-jxhk   Created container test-container-volume-dynamicpv-jxhk
provisioning-1455   79s    Normal    Started        pod/pod-subpath-test-dynamicpv-jxhk   Started container test-container-volume-dynamicpv-jxhk
provisioning-1455   73s    Normal    Killing        pod/pod-subpath-test-dynamicpv-jxhk   Stopping container test-container-volume-dynamicpv-jxhk
provisioning-2093   60s   Normal   Scheduled   pod/gluster-server   Successfully assigned provisioning-2093/gluster-server to bootstrap-e2e-minion-group-w9fq
provisioning-2093   58s   Normal   Pulled      pod/gluster-server   Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
provisioning-2093   57s   Normal   Created     pod/gluster-server   Created container gluster-server
provisioning-2093   57s   Normal   Started     pod/gluster-server   Started container gluster-server
provisioning-2093   13s   Normal   Killing     pod/gluster-server   Stopping container gluster-server
provisioning-2093   51s   Normal   Scheduled   pod/pod-subpath-test-inlinevolume-zlm9   Successfully assigned provisioning-2093/pod-subpath-test-inlinevolume-zlm9 to bootstrap-e2e-minion-group-w9fq
provisioning-2093   49s   Normal   Pulled      pod/pod-subpath-test-inlinevolume-zlm9   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2093   49s   Normal   Created     pod/pod-subpath-test-inlinevolume-zlm9   Created container init-volume-inlinevolume-zlm9
provisioning-2093   48s   Normal   Started     pod/pod-subpath-test-inlinevolume-zlm9   Started container init-volume-inlinevolume-zlm9
provisioning-2093   47s   Normal   Pulled      pod/pod-subpath-test-inlinevolume-zlm9   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2093   47s   Normal   Created     pod/pod-subpath-test-inlinevolume-zlm9   Created container test-init-subpath-inlinevolume-zlm9
provisioning-2093   46s   Normal   Started     pod/pod-subpath-test-inlinevolume-zlm9   Started container test-init-subpath-inlinevolume-zlm9
provisioning-2093   45s   Normal   Pulled      pod/pod-subpath-test-inlinevolume-zlm9   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2093   44s   Normal   Created     pod/pod-subpath-test-inlinevolume-zlm9   Created container test-container-subpath-inlinevolume-zlm9
provisioning-2093   44s   Normal   Started     pod/pod-subpath-test-inlinevolume-zlm9   Started container test-container-subpath-inlinevolume-zlm9
provisioning-2093   34s   Normal   Scheduled   pod/pod-subpath-test-inlinevolume-zlm9   Successfully assigned provisioning-2093/pod-subpath-test-inlinevolume-zlm9 to bootstrap-e2e-minion-group-zzr9
provisioning-2093   30s   Normal   Pulled      pod/pod-subpath-test-inlinevolume-zlm9   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2093   29s   Normal   Created     pod/pod-subpath-test-inlinevolume-zlm9   Created container test-container-subpath-inlinevolume-zlm9
provisioning-2093   28s   Normal   Started     pod/pod-subpath-test-inlinevolume-zlm9   Started container test-container-subpath-inlinevolume-zlm9
provisioning-2251   63s   Normal    Pulled         pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
provisioning-2251   63s   Normal    Created        pod/csi-hostpath-attacher-0   Created container csi-attacher
provisioning-2251   62s   Normal    Started        pod/csi-hostpath-attacher-0   Started container csi-attacher
provisioning-2251   15s   Normal    Killing        pod/csi-hostpath-attacher-0   Stopping container csi-attacher
provisioning-2251   70s   Warning   FailedCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-2251   67s   Normal    SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-2251   65s   Normal    Pulled         pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
provisioning-2251   65s   Normal    Created        pod/csi-hostpath-provisioner-0   Created container csi-provisioner
provisioning-2251   63s   Normal    Started        pod/csi-hostpath-provisioner-0   Started container csi-provisioner
provisioning-2251   15s   Normal    Killing        pod/csi-hostpath-provisioner-0   Stopping container csi-provisioner
provisioning-2251   70s   Warning   FailedCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-2251   69s   Normal    SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-2251   63s   Normal    Pulled         pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
provisioning-2251   63s   Normal    Created        pod/csi-hostpath-resizer-0   Created container csi-resizer
provisioning-2251   62s   Normal    Started        pod/csi-hostpath-resizer-0   Started container csi-resizer
provisioning-2251   14s   Normal    Killing        pod/csi-hostpath-resizer-0   Stopping container csi-resizer
provisioning-2251   70s   Warning   FailedCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
provisioning-2251   69s   Normal    SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
provisioning-2251   69s   Normal    ExternalProvisioning    persistentvolumeclaim/csi-hostpathp4gnh   waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-2251" or manually created by system administrator
provisioning-2251   62s   Normal    Provisioning            persistentvolumeclaim/csi-hostpathp4gnh   External provisioner is provisioning volume for claim "provisioning-2251/csi-hostpathp4gnh"
provisioning-2251   62s   Normal    ProvisioningSucceeded   persistentvolumeclaim/csi-hostpathp4gnh   Successfully provisioned volume pvc-41ca79e8-996e-45e9-87d9-976e8b009fc8
provisioning-2251   68s   Normal    Pulled         pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
provisioning-2251   67s   Normal    Created        pod/csi-hostpathplugin-0   Created container node-driver-registrar
provisioning-2251   66s   Normal    Started        pod/csi-hostpathplugin-0   Started container node-driver-registrar
provisioning-2251   66s   Normal    Pulled         pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
provisioning-2251   66s   Normal    Created        pod/csi-hostpathplugin-0   Created container hostpath
provisioning-2251   63s   Normal    Started        pod/csi-hostpathplugin-0   Started container hostpath
provisioning-2251   63s   Normal    Pulled         pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
provisioning-2251   63s   Normal    Created        pod/csi-hostpathplugin-0   Created container liveness-probe
provisioning-2251   63s   Normal    Started        pod/csi-hostpathplugin-0   Started container liveness-probe
provisioning-2251   15s   Normal    Killing        pod/csi-hostpathplugin-0   Stopping container node-driver-registrar
provisioning-2251   15s   Normal    Killing        pod/csi-hostpathplugin-0   Stopping container liveness-probe
provisioning-2251   15s   Normal    Killing        pod/csi-hostpathplugin-0   Stopping container hostpath
provisioning-2251   14s   Warning   Unhealthy      pod/csi-hostpathplugin-0   Liveness probe failed: Get http://10.64.0.63:9898/healthz: dial tcp 10.64.0.63:9898: connect: connection refused
provisioning-2251   13s   Warning   FailedPreStopHook   pod/csi-hostpathplugin-0   Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_provisioning-2251(b98e8e21-974c-4fe8-bca0-2e7709ebdb9f)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
provisioning-2251   72s   Normal    SuccessfulCreate   statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-2251   13s   Normal    Pulled         pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
provisioning-2251   12s   Normal    Created        pod/csi-snapshotter-0   Created container csi-snapshotter
provisioning-2251   63s   Normal    Started        pod/csi-snapshotter-0   Started container csi-snapshotter
provisioning-2251   12s   Warning   FailedMount    pod/csi-snapshotter-0   MountVolume.SetUp failed for volume "csi-snapshotter-token-6zb4z" : secret "csi-snapshotter-token-6zb4z" not found
provisioning-2251   70s   Warning   FailedCreate
              statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nprovisioning-2251                    69s         Normal    SuccessfulCreate           statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nprovisioning-2251                    59s         Normal    SuccessfulAttachVolume     pod/pod-subpath-test-dynamicpv-fltl                              AttachVolume.Attach succeeded for volume \"pvc-41ca79e8-996e-45e9-87d9-976e8b009fc8\"\nprovisioning-2251                    46s         Normal    Pulled                     pod/pod-subpath-test-dynamicpv-fltl                              Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-2251                    46s         Normal    Created                    pod/pod-subpath-test-dynamicpv-fltl                              Created container init-volume-dynamicpv-fltl\nprovisioning-2251                    45s         Normal    Started                    pod/pod-subpath-test-dynamicpv-fltl                              Started container init-volume-dynamicpv-fltl\nprovisioning-2251                    45s         Normal    Pulled                     pod/pod-subpath-test-dynamicpv-fltl                              Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-2251                    45s         Normal    Created                    pod/pod-subpath-test-dynamicpv-fltl                              Created container test-init-subpath-dynamicpv-fltl\nprovisioning-2251                    45s         Normal    Started                    pod/pod-subpath-test-dynamicpv-fltl                              Started container 
test-init-subpath-dynamicpv-fltl\nprovisioning-2251                    44s         Normal    Pulled                     pod/pod-subpath-test-dynamicpv-fltl                              Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-2251                    44s         Normal    Created                    pod/pod-subpath-test-dynamicpv-fltl                              Created container test-container-subpath-dynamicpv-fltl\nprovisioning-2251                    43s         Normal    Started                    pod/pod-subpath-test-dynamicpv-fltl                              Started container test-container-subpath-dynamicpv-fltl\nprovisioning-2251                    43s         Normal    Pulled                     pod/pod-subpath-test-dynamicpv-fltl                              Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-2251                    43s         Normal    Created                    pod/pod-subpath-test-dynamicpv-fltl                              Created container test-container-volume-dynamicpv-fltl\nprovisioning-2251                    43s         Normal    Started                    pod/pod-subpath-test-dynamicpv-fltl                              Started container test-container-volume-dynamicpv-fltl\nprovisioning-3380                    110s        Normal    Started                    pod/pod-subpath-test-preprovisionedpv-2q72                       Started container test-init-subpath-preprovisionedpv-2q72\nprovisioning-3380                    110s        Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-2q72                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-3380                    109s        Normal    Created                    pod/pod-subpath-test-preprovisionedpv-2q72                       Created container 
test-container-subpath-preprovisionedpv-2q72\nprovisioning-3380                    109s        Normal    Started                    pod/pod-subpath-test-preprovisionedpv-2q72                       Started container test-container-subpath-preprovisionedpv-2q72\nprovisioning-3380                    94s         Normal    Scheduled                  pod/pod-subpath-test-preprovisionedpv-2q72                       Successfully assigned provisioning-3380/pod-subpath-test-preprovisionedpv-2q72 to bootstrap-e2e-minion-group-w9fq\nprovisioning-3380                    91s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-2q72                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-3380                    91s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-2q72                       Created container test-container-subpath-preprovisionedpv-2q72\nprovisioning-3380                    91s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-2q72                       Started container test-container-subpath-preprovisionedpv-2q72\nprovisioning-3380                    2m10s       Warning   ProvisioningFailed         persistentvolumeclaim/pvc-9rwmd                                  storageclass.storage.k8s.io \"provisioning-3380\" not found\nprovisioning-3542                    83s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-6tqd-5djc6               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-3542                    83s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-5djc6               Created container agnhost\nprovisioning-3542                    82s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-5djc6               Started 
container agnhost\nprovisioning-3542                    52s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-smzf                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-3542                    52s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-smzf                       Created container init-volume-preprovisionedpv-smzf\nprovisioning-3542                    52s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-smzf                       Started container init-volume-preprovisionedpv-smzf\nprovisioning-3542                    51s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-smzf                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-3542                    51s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-smzf                       Created container test-init-subpath-preprovisionedpv-smzf\nprovisioning-3542                    50s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-smzf                       Started container test-init-subpath-preprovisionedpv-smzf\nprovisioning-3542                    50s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-smzf                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-3542                    50s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-smzf                       Created container test-container-subpath-preprovisionedpv-smzf\nprovisioning-3542                    49s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-smzf                       Started container 
test-container-subpath-preprovisionedpv-smzf\nprovisioning-3542                    38s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-smzf                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-3542                    38s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-smzf                       Created container test-container-subpath-preprovisionedpv-smzf\nprovisioning-3542                    36s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-smzf                       Started container test-container-subpath-preprovisionedpv-smzf\nprovisioning-3542                    70s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-pldvk                                  storageclass.storage.k8s.io \"provisioning-3542\" not found\nprovisioning-3649                    37s         Normal    LeaderElection             endpoints/example.com-nfs-provisioning-3649                      external-provisioner-r6zzc_892fb61e-c6b8-4edf-bab5-aaedcc977ce3 became leader\nprovisioning-3649                    48s         Normal    Scheduled                  pod/external-provisioner-r6zzc                                   Successfully assigned provisioning-3649/external-provisioner-r6zzc to bootstrap-e2e-minion-group-w9fq\nprovisioning-3649                    44s         Normal    Pulled                     pod/external-provisioner-r6zzc                                   Container image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\" already present on machine\nprovisioning-3649                    44s         Normal    Created                    pod/external-provisioner-r6zzc                                   Created container nfs-provisioner\nprovisioning-3649                    44s         Normal    Started                    pod/external-provisioner-r6zzc                            
       Started container nfs-provisioner\nprovisioning-3649                    32s         Normal    ExternalProvisioning       persistentvolumeclaim/pvc-fwf4l                                  waiting for a volume to be created, either by external provisioner \"example.com/nfs-provisioning-3649\" or manually created by system administrator\nprovisioning-3649                    32s         Normal    Provisioning               persistentvolumeclaim/pvc-fwf4l                                  External provisioner is provisioning volume for claim \"provisioning-3649/pvc-fwf4l\"\nprovisioning-3649                    32s         Normal    ProvisioningSucceeded      persistentvolumeclaim/pvc-fwf4l                                  Successfully provisioned volume pvc-dd91feae-045d-4f5f-a52c-37069f3ce39f\nprovisioning-3649                    8s          Normal    Pulled                     pod/pvc-volume-tester-reader-khx8k                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-3649                    8s          Normal    Created                    pod/pvc-volume-tester-reader-khx8k                               Created container volume-tester\nprovisioning-3649                    8s          Normal    Started                    pod/pvc-volume-tester-reader-khx8k                               Started container volume-tester\nprovisioning-3649                    32s         Warning   FailedScheduling           pod/pvc-volume-tester-writer-xl5vm                               running \"VolumeBinding\" filter plugin for pod \"pvc-volume-tester-writer-xl5vm\": pod has unbound immediate PersistentVolumeClaims\nprovisioning-3649                    31s         Normal    Scheduled                  pod/pvc-volume-tester-writer-xl5vm                               Successfully assigned provisioning-3649/pvc-volume-tester-writer-xl5vm to bootstrap-e2e-minion-group-zzr9\nprovisioning-3649                    26s         
Normal    Pulled                     pod/pvc-volume-tester-writer-xl5vm                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-3649                    26s         Normal    Created                    pod/pvc-volume-tester-writer-xl5vm                               Created container volume-tester\nprovisioning-3649                    24s         Normal    Started                    pod/pvc-volume-tester-writer-xl5vm                               Started container volume-tester\nprovisioning-4051                    59s         Normal    Scheduled                  pod/gluster-server                                               Successfully assigned provisioning-4051/gluster-server to bootstrap-e2e-minion-group-6tqd\nprovisioning-4051                    58s         Normal    Pulled                     pod/gluster-server                                               Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nprovisioning-4051                    58s         Normal    Created                    pod/gluster-server                                               Created container gluster-server\nprovisioning-4051                    58s         Normal    Started                    pod/gluster-server                                               Started container gluster-server\nprovisioning-4051                    37s         Normal    Killing                    pod/gluster-server                                               Stopping container gluster-server\nprovisioning-4051                    53s         Normal    Scheduled                  pod/pod-subpath-test-inlinevolume-2lrv                           Successfully assigned provisioning-4051/pod-subpath-test-inlinevolume-2lrv to bootstrap-e2e-minion-group-zzr9\nprovisioning-4051                    49s         Normal    Pulled                     pod/pod-subpath-test-inlinevolume-2lrv               
            Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4051                    49s         Normal    Created                    pod/pod-subpath-test-inlinevolume-2lrv                           Created container test-init-subpath-inlinevolume-2lrv\nprovisioning-4051                    48s         Normal    Started                    pod/pod-subpath-test-inlinevolume-2lrv                           Started container test-init-subpath-inlinevolume-2lrv\nprovisioning-4051                    47s         Normal    Pulled                     pod/pod-subpath-test-inlinevolume-2lrv                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4051                    46s         Normal    Created                    pod/pod-subpath-test-inlinevolume-2lrv                           Created container test-container-subpath-inlinevolume-2lrv\nprovisioning-4051                    46s         Normal    Started                    pod/pod-subpath-test-inlinevolume-2lrv                           Started container test-container-subpath-inlinevolume-2lrv\nprovisioning-4051                    46s         Normal    Pulled                     pod/pod-subpath-test-inlinevolume-2lrv                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4051                    46s         Normal    Created                    pod/pod-subpath-test-inlinevolume-2lrv                           Created container test-container-volume-inlinevolume-2lrv\nprovisioning-4051                    45s         Normal    Started                    pod/pod-subpath-test-inlinevolume-2lrv                           Started container test-container-volume-inlinevolume-2lrv\nprovisioning-5007                    9s          Normal    Pulled                     pod/hostpath-symlink-prep-provisioning-5007               
       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-5007                    9s          Normal    Created                    pod/hostpath-symlink-prep-provisioning-5007                      Created container init-volume-provisioning-5007\nprovisioning-5007                    9s          Normal    Started                    pod/hostpath-symlink-prep-provisioning-5007                      Started container init-volume-provisioning-5007\nprovisioning-532                     17s         Normal    Scheduled                  pod/gluster-server                                               Successfully assigned provisioning-532/gluster-server to bootstrap-e2e-minion-group-w9fq\nprovisioning-532                     15s         Normal    Pulled                     pod/gluster-server                                               Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nprovisioning-532                     14s         Normal    Created                    pod/gluster-server                                               Created container gluster-server\nprovisioning-532                     14s         Normal    Started                    pod/gluster-server                                               Started container gluster-server\nprovisioning-532                     11s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-dqc64                                  storageclass.storage.k8s.io \"provisioning-532\" not found\nprovisioning-9087                    65s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-6tqd-9bsks               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-9087                    65s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-9bsks               Created container 
agnhost\nprovisioning-9087                    65s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-9bsks               Started container agnhost\nprovisioning-9087                    22s         Normal    Killing                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-9bsks               Stopping container agnhost\nprovisioning-9087                    35s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-4dnq                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-9087                    34s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-4dnq                       Created container init-volume-preprovisionedpv-4dnq\nprovisioning-9087                    34s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-4dnq                       Started container init-volume-preprovisionedpv-4dnq\nprovisioning-9087                    34s         Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-4dnq                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-9087                    34s         Normal    Created                    pod/pod-subpath-test-preprovisionedpv-4dnq                       Created container test-container-subpath-preprovisionedpv-4dnq\nprovisioning-9087                    33s         Normal    Started                    pod/pod-subpath-test-preprovisionedpv-4dnq                       Started container test-container-subpath-preprovisionedpv-4dnq\nprovisioning-9087                    58s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-qnddl                                  storageclass.storage.k8s.io \"provisioning-9087\" not found\nprovisioning-9405                    31s         Normal    Pulled                     
pod/hostexec-bootstrap-e2e-minion-group-d58v-dr2h2               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-9405                    31s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-d58v-dr2h2               Created container agnhost\nprovisioning-9405                    30s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-d58v-dr2h2               Started container agnhost\nprovisioning-9405                    8s          Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-869x                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-9405                    8s          Normal    Created                    pod/pod-subpath-test-preprovisionedpv-869x                       Created container init-volume-preprovisionedpv-869x\nprovisioning-9405                    7s          Normal    Started                    pod/pod-subpath-test-preprovisionedpv-869x                       Started container init-volume-preprovisionedpv-869x\nprovisioning-9405                    6s          Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-869x                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-9405                    6s          Normal    Created                    pod/pod-subpath-test-preprovisionedpv-869x                       Created container test-init-volume-preprovisionedpv-869x\nprovisioning-9405                    5s          Normal    Started                    pod/pod-subpath-test-preprovisionedpv-869x                       Started container test-init-volume-preprovisionedpv-869x\nprovisioning-9405                    5s          Normal    Pulled                     pod/pod-subpath-test-preprovisionedpv-869x                       Container image 
\"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-9405                    5s          Normal    Created                    pod/pod-subpath-test-preprovisionedpv-869x                       Created container test-container-subpath-preprovisionedpv-869x\nprovisioning-9405                    4s          Normal    Started                    pod/pod-subpath-test-preprovisionedpv-869x                       Started container test-container-subpath-preprovisionedpv-869x\nprovisioning-9405                    26s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-6f8wk                                  storageclass.storage.k8s.io \"provisioning-9405\" not found\npv-9750                              61s         Normal    Scheduled                  pod/pvc-tester-8k9j4                                             Successfully assigned pv-9750/pvc-tester-8k9j4 to bootstrap-e2e-minion-group-zzr9\npv-9750                              53s         Normal    SuccessfulAttachVolume     pod/pvc-tester-8k9j4                                             AttachVolume.Attach succeeded for volume \"gce-wdsfx\"\npv-9750                              47s         Normal    Pulled                     pod/pvc-tester-8k9j4                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\npv-9750                              47s         Normal    Created                    pod/pvc-tester-8k9j4                                             Created container write-pod\npv-9750                              46s         Normal    Started                    pod/pvc-tester-8k9j4                                             Started container write-pod\npv-9750                              42s         Normal    Killing                    pod/pvc-tester-8k9j4                                             Stopping container write-pod\nresourcequota-5118                   63s         Warning   
ProvisioningFailed         persistentvolumeclaim/test-claim                                 Failed to provision volume with StorageClass \"standard\": invalid AccessModes [ReadWriteOnce ReadOnlyMany ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported\nsched-preemption-path-5056           11s         Warning   FailedScheduling           pod/pod4                                                         0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient example.com/fakecpu.\nsched-preemption-path-5056           1s          Normal    Scheduled                  pod/pod4                                                         Successfully assigned sched-preemption-path-5056/pod4 to bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           44s         Warning   FailedScheduling           pod/rs-pod1-j676r                                                0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient example.com/fakecpu.\nsched-preemption-path-5056           41s         Normal    Scheduled                  pod/rs-pod1-j676r                                                Successfully assigned sched-preemption-path-5056/rs-pod1-j676r to bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           40s         Warning   FailedMount                pod/rs-pod1-j676r                                                MountVolume.SetUp failed for volume \"default-token-47fbr\" : failed to sync secret cache: timed out waiting for the condition\nsched-preemption-path-5056           37s         Normal    Pulled                     pod/rs-pod1-j676r                                                Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsched-preemption-path-5056           37s         Normal    Created                    pod/rs-pod1-j676r                                                Created container pod1\nsched-preemption-path-5056           36s         Normal    Started       
             pod/rs-pod1-j676r                                                Started container pod1\nsched-preemption-path-5056           19s         Normal    Killing                    pod/rs-pod1-j676r                                                Stopping container pod1\nsched-preemption-path-5056           19s         Normal    Preempted                  pod/rs-pod1-j676r                                                Preempted by sched-preemption-path-5056/pod4 on node bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           8s          Warning   FailedScheduling           pod/rs-pod1-kzgzn                                                0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient example.com/fakecpu.\nsched-preemption-path-5056           46s         Normal    SuccessfulCreate           replicaset/rs-pod1                                               Created pod: rs-pod1-j676r\nsched-preemption-path-5056           18s         Normal    SuccessfulCreate           replicaset/rs-pod1                                               Created pod: rs-pod1-kzgzn\nsched-preemption-path-5056           33s         Normal    Scheduled                  pod/rs-pod2-r5x4z                                                Successfully assigned sched-preemption-path-5056/rs-pod2-r5x4z to bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           32s         Normal    Pulled                     pod/rs-pod2-r5x4z                                                Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsched-preemption-path-5056           32s         Normal    Created                    pod/rs-pod2-r5x4z                                                Created container pod2\nsched-preemption-path-5056           31s         Normal    Started                    pod/rs-pod2-r5x4z                                                Started container pod2\nsched-preemption-path-5056           19s         Normal    
Preempted                  pod/rs-pod2-r5x4z                                                Preempted by sched-preemption-path-5056/pod4 on node bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           19s         Normal    Killing                    pod/rs-pod2-r5x4z                                                Stopping container pod2\nsched-preemption-path-5056           9s          Warning   FailedScheduling           pod/rs-pod2-tf8vj                                                0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient example.com/fakecpu.\nsched-preemption-path-5056           33s         Normal    SuccessfulCreate           replicaset/rs-pod2                                               Created pod: rs-pod2-r5x4z\nsched-preemption-path-5056           19s         Normal    SuccessfulCreate           replicaset/rs-pod2                                               Created pod: rs-pod2-tf8vj\nsched-preemption-path-5056           26s         Normal    Scheduled                  pod/rs-pod3-7zjmk                                                Successfully assigned sched-preemption-path-5056/rs-pod3-7zjmk to bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           25s         Normal    Pulled                     pod/rs-pod3-7zjmk                                                Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsched-preemption-path-5056           25s         Normal    Created                    pod/rs-pod3-7zjmk                                                Created container pod3\nsched-preemption-path-5056           24s         Normal    Started                    pod/rs-pod3-7zjmk                                                Started container pod3\nsched-preemption-path-5056           27s         Normal    SuccessfulCreate           replicaset/rs-pod3                                               Created pod: rs-pod3-7zjmk\nsched-preemption-path-5056           53s      
   Normal    Scheduled                  pod/without-label                                                Successfully assigned sched-preemption-path-5056/without-label to bootstrap-e2e-minion-group-w9fq\nsched-preemption-path-5056           52s         Normal    Pulled                     pod/without-label                                                Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsched-preemption-path-5056           52s         Normal    Created                    pod/without-label                                                Created container without-label\nsched-preemption-path-5056           51s         Normal    Started                    pod/without-label                                                Started container without-label\nsched-preemption-path-5056           47s         Normal    Killing                    pod/without-label                                                Stopping container without-label\nsecrets-2790                         38s         Normal    Scheduled                  pod/pod-secrets-f2442414-f61d-4b62-8dcc-dd46b46597b2             Successfully assigned secrets-2790/pod-secrets-f2442414-f61d-4b62-8dcc-dd46b46597b2 to bootstrap-e2e-minion-group-zzr9\nsecrets-2790                         35s         Normal    Pulled                     pod/pod-secrets-f2442414-f61d-4b62-8dcc-dd46b46597b2             Container image \"docker.io/library/busybox:1.29\" already present on machine\nsecrets-2790                         35s         Normal    Created                    pod/pod-secrets-f2442414-f61d-4b62-8dcc-dd46b46597b2             Created container secret-env-test\nsecrets-2790                         35s         Normal    Started                    pod/pod-secrets-f2442414-f61d-4b62-8dcc-dd46b46597b2             Started container secret-env-test\nsecrets-3750                         31s         Normal    Scheduled                  pod/pod-secrets-19a9e1d2-7fad-4812-9539-29329bb97c7c            
 Successfully assigned secrets-3750/pod-secrets-19a9e1d2-7fad-4812-9539-29329bb97c7c to bootstrap-e2e-minion-group-6tqd\nsecrets-3750                         29s         Normal    Pulled                     pod/pod-secrets-19a9e1d2-7fad-4812-9539-29329bb97c7c             Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nsecrets-3750                         29s         Normal    Created                    pod/pod-secrets-19a9e1d2-7fad-4812-9539-29329bb97c7c             Created container secret-volume-test\nsecrets-3750                         28s         Normal    Started                    pod/pod-secrets-19a9e1d2-7fad-4812-9539-29329bb97c7c             Started container secret-volume-test\nsecurity-context-test-431            54s         Normal    Scheduled                  pod/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        Successfully assigned security-context-test-431/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f to bootstrap-e2e-minion-group-zzr9\nsecurity-context-test-431            50s         Normal    Pulling                    pod/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        Pulling image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-431            49s         Normal    Pulled                     pod/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\nsecurity-context-test-431            48s         Normal    Created                    pod/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        Created container alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f\nsecurity-context-test-431            48s         Normal    Started                    pod/alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f        Started container alpine-nnp-false-243a7f50-7260-4028-8f1f-71d78e547b8f\nsecurity-context-test-723            33s         Normal    
Scheduled                  pod/busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0   Successfully assigned security-context-test-723/busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0 to bootstrap-e2e-minion-group-zzr9\nsecurity-context-test-723            28s         Normal    Pulled                     pod/busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0   Container image \"docker.io/library/busybox:1.29\" already present on machine\nsecurity-context-test-723            28s         Normal    Created                    pod/busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0   Created container busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0\nsecurity-context-test-723            27s         Normal    Started                    pod/busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0   Started container busybox-readonly-true-1bfac432-05e6-43c8-af05-88b831ffc1b0\nservices-2515                        20s         Normal    Scheduled                  pod/execpod7dfjh                                                 Successfully assigned services-2515/execpod7dfjh to bootstrap-e2e-minion-group-6tqd\nservices-2515                        16s         Normal    Pulled                     pod/execpod7dfjh                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2515                        16s         Normal    Created                    pod/execpod7dfjh                                                 Created container agnhost-pause\nservices-2515                        15s         Normal    Started                    pod/execpod7dfjh                                                 Started container agnhost-pause\nservices-2515                        24s         Normal    Scheduled                  pod/externalname-service-9kz8f                                   Successfully assigned services-2515/externalname-service-9kz8f to 
bootstrap-e2e-minion-group-6tqd\nservices-2515                        23s         Normal    Pulled                     pod/externalname-service-9kz8f                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2515                        23s         Normal    Created                    pod/externalname-service-9kz8f                                   Created container externalname-service\nservices-2515                        22s         Normal    Started                    pod/externalname-service-9kz8f                                   Started container externalname-service\nservices-2515                        24s         Normal    Scheduled                  pod/externalname-service-h25tx                                   Successfully assigned services-2515/externalname-service-h25tx to bootstrap-e2e-minion-group-w9fq\nservices-2515                        22s         Normal    Pulled                     pod/externalname-service-h25tx                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2515                        22s         Normal    Created                    pod/externalname-service-h25tx                                   Created container externalname-service\nservices-2515                        22s         Normal    Started                    pod/externalname-service-h25tx                                   Started container externalname-service\nservices-2515                        24s         Normal    SuccessfulCreate           replicationcontroller/externalname-service                       Created pod: externalname-service-9kz8f\nservices-2515                        24s         Normal    SuccessfulCreate           replicationcontroller/externalname-service                       Created pod: externalname-service-h25tx\nservices-2515                        21s         Warning   
FailedToUpdateEndpoint     endpoints/externalname-service                                   Failed to update endpoint services-2515/externalname-service: Operation cannot be fulfilled on endpoints \"externalname-service\": the object has been modified; please apply your changes to the latest version and try again\nservices-4906                        35s         Normal    Scheduled                  pod/execpod-wwlql                                                Successfully assigned services-4906/execpod-wwlql to bootstrap-e2e-minion-group-zzr9\nservices-4906                        34s         Normal    Pulled                     pod/execpod-wwlql                                                Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-4906                        34s         Normal    Created                    pod/execpod-wwlql                                                Created container agnhost-pause\nservices-4906                        32s         Normal    Started                    pod/execpod-wwlql                                                Started container agnhost-pause\nservices-4906                        46s         Normal    Scheduled                  pod/slow-terminating-unready-pod-klqkd                           Successfully assigned services-4906/slow-terminating-unready-pod-klqkd to bootstrap-e2e-minion-group-6tqd\nservices-4906                        43s         Normal    Pulled                     pod/slow-terminating-unready-pod-klqkd                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-4906                        43s         Normal    Created                    pod/slow-terminating-unready-pod-klqkd                           Created container slow-terminating-unready-pod\nservices-4906                        42s         Normal    Started                    pod/slow-terminating-unready-pod-klqkd        
                   Started container slow-terminating-unready-pod\nservices-4906                        5s          Warning   Unhealthy                  pod/slow-terminating-unready-pod-klqkd                           Readiness probe failed:\nservices-4906                        18s         Normal    Killing                    pod/slow-terminating-unready-pod-klqkd                           Stopping container slow-terminating-unready-pod\nservices-4906                        46s         Normal    SuccessfulCreate           replicationcontroller/slow-terminating-unready-pod               Created pod: slow-terminating-unready-pod-klqkd\nservices-4906                        18s         Normal    SuccessfulDelete           replicationcontroller/slow-terminating-unready-pod               Deleted pod: slow-terminating-unready-pod-klqkd\nsubpath-2036                         38s         Normal    Scheduled                  pod/pod-subpath-test-secret-m5xq                                 Successfully assigned subpath-2036/pod-subpath-test-secret-m5xq to bootstrap-e2e-minion-group-zzr9\nsubpath-2036                         35s         Normal    Pulled                     pod/pod-subpath-test-secret-m5xq                                 Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nsubpath-2036                         35s         Normal    Created                    pod/pod-subpath-test-secret-m5xq                                 Created container test-container-subpath-secret-m5xq\nsubpath-2036                         35s         Normal    Started                    pod/pod-subpath-test-secret-m5xq                                 Started container test-container-subpath-secret-m5xq\nsysctl-1255                          68s         Normal    Scheduled                  pod/sysctl-497fb630-456d-411b-bb02-dd430e1bc51d                  Successfully assigned sysctl-1255/sysctl-497fb630-456d-411b-bb02-dd430e1bc51d to 
bootstrap-e2e-minion-group-w9fq\nsysctl-1255                          65s         Normal    Pulled                     pod/sysctl-497fb630-456d-411b-bb02-dd430e1bc51d                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nsysctl-1255                          65s         Normal    Created                    pod/sysctl-497fb630-456d-411b-bb02-dd430e1bc51d                  Created container test-container\nsysctl-1255                          65s         Normal    Started                    pod/sysctl-497fb630-456d-411b-bb02-dd430e1bc51d                  Started container test-container\nsysctl-8370                          21s         Normal    Scheduled                  pod/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  Successfully assigned sysctl-8370/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab to bootstrap-e2e-minion-group-zzr9\nsysctl-8370                          20s         Warning   FailedMount                pod/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  MountVolume.SetUp failed for volume \"default-token-rshdx\" : failed to sync secret cache: timed out waiting for the condition\nsysctl-8370                          15s         Normal    Pulled                     pod/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nsysctl-8370                          15s         Normal    Created                    pod/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  Created container test-container\nsysctl-8370                          13s         Normal    Started                    pod/sysctl-53498c3d-d83d-420d-a7a1-544d0af63cab                  Started container test-container\nvolume-3106                          75s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-d58v-82wnd               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" 
already present on machine\nvolume-3106                          75s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-d58v-82wnd               Created container agnhost\nvolume-3106                          75s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-d58v-82wnd               Started container agnhost\nvolume-3106                          23s         Normal    Pulled                     pod/local-client                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-3106                          23s         Normal    Created                    pod/local-client                                                 Created container local-client\nvolume-3106                          23s         Normal    Started                    pod/local-client                                                 Started container local-client\nvolume-3106                          16s         Normal    Killing                    pod/local-client                                                 Stopping container local-client\nvolume-3106                          52s         Normal    Pulled                     pod/local-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-3106                          52s         Normal    Created                    pod/local-injector                                               Created container local-injector\nvolume-3106                          52s         Normal    Started                    pod/local-injector                                               Started container local-injector\nvolume-3106                          35s         Normal    Killing                    pod/local-injector                                               Stopping container local-injector\nvolume-3106                          67s         
Warning   ProvisioningFailed         persistentvolumeclaim/pvc-2bdjg                                  storageclass.storage.k8s.io \"volume-3106\" not found\nvolume-3793                          22s         Normal    Scheduled                  pod/gcepd-client                                                 Successfully assigned volume-3793/gcepd-client to bootstrap-e2e-minion-group-6tqd\nvolume-3793                          16s         Warning   FailedAttachVolume         pod/gcepd-client                                                 AttachVolume.Attach failed for volume \"gcepd-volume-0\" : googleapi: Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-0abd6273-de24-4e35-a307-237b84f82f4e' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-w9fq'\nvolume-3793                          2s          Normal    SuccessfulAttachVolume     pod/gcepd-client                                                 AttachVolume.Attach succeeded for volume \"gcepd-volume-0\"\nvolume-3793                          56s         Normal    Scheduled                  pod/gcepd-injector                                               Successfully assigned volume-3793/gcepd-injector to bootstrap-e2e-minion-group-w9fq\nvolume-3793                          50s         Normal    SuccessfulAttachVolume     pod/gcepd-injector                                               AttachVolume.Attach succeeded for volume \"gcepd-volume-0\"\nvolume-3793                          44s         Normal    Pulled                     pod/gcepd-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-3793                          44s         Normal    Created                    pod/gcepd-injector                                               Created container gcepd-injector\nvolume-3793                    
      44s         Normal    Started                    pod/gcepd-injector                                               Started container gcepd-injector\nvolume-3793                          29s         Normal    Killing                    pod/gcepd-injector                                               Stopping container gcepd-injector\nvolume-4540                          71s         Normal    Scheduled                  pod/exec-volume-test-preprovisionedpv-qbsd                       Successfully assigned volume-4540/exec-volume-test-preprovisionedpv-qbsd to bootstrap-e2e-minion-group-zzr9\nvolume-4540                          64s         Normal    SuccessfulAttachVolume     pod/exec-volume-test-preprovisionedpv-qbsd                       AttachVolume.Attach succeeded for volume \"gcepd-cpq2r\"\nvolume-4540                          59s         Normal    Pulling                    pod/exec-volume-test-preprovisionedpv-qbsd                       Pulling image \"docker.io/library/nginx:1.14-alpine\"\nvolume-4540                          58s         Normal    Pulled                     pod/exec-volume-test-preprovisionedpv-qbsd                       Successfully pulled image \"docker.io/library/nginx:1.14-alpine\"\nvolume-4540                          58s         Normal    Created                    pod/exec-volume-test-preprovisionedpv-qbsd                       Created container exec-container-preprovisionedpv-qbsd\nvolume-4540                          57s         Normal    Started                    pod/exec-volume-test-preprovisionedpv-qbsd                       Started container exec-container-preprovisionedpv-qbsd\nvolume-4540                          78s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-bvrjb                                  storageclass.storage.k8s.io \"volume-4540\" not found\nvolumemode-1350                      62s         Normal    WaitForFirstConsumer       persistentvolumeclaim/gcepd5vn74                             
    waiting for first consumer to be created before binding\nvolumemode-1350                      58s         Normal    ProvisioningSucceeded      persistentvolumeclaim/gcepd5vn74                                 Successfully provisioned volume pvc-94673b27-ba04-401a-b197-b057b79b6632 using kubernetes.io/gce-pd\nvolumemode-1350                      51s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-6tqd-mqnfr               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-1350                      51s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-mqnfr               Created container agnhost\nvolumemode-1350                      51s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-mqnfr               Started container agnhost\nvolumemode-1350                      43s         Normal    Killing                    pod/hostexec-bootstrap-e2e-minion-group-6tqd-mqnfr               Stopping container agnhost\nvolumemode-1350                      57s         Normal    Scheduled                  pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        Successfully assigned volumemode-1350/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77 to bootstrap-e2e-minion-group-6tqd\nvolumemode-1350                      55s         Normal    Pulled                     pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-1350                      55s         Normal    Created                    pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        Created container write-pod\nvolumemode-1350                      54s         Normal    Started                    pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        Started container write-pod\nvolumemode-1350                      
51s         Normal    SuccessfulAttachVolume     pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        AttachVolume.Attach succeeded for volume \"pvc-94673b27-ba04-401a-b197-b057b79b6632\"\nvolumemode-1350                      42s         Normal    Killing                    pod/security-context-5480eee5-4bb9-4a0f-a241-33c8905c3b77        Stopping container write-pod\nvolumemode-5518                      12s         Normal    Scheduled                  pod/gluster-server                                               Successfully assigned volumemode-5518/gluster-server to bootstrap-e2e-minion-group-6tqd\nvolumemode-5518                      11s         Normal    Pulled                     pod/gluster-server                                               Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nvolumemode-5518                      11s         Normal    Created                    pod/gluster-server                                               Created container gluster-server\nvolumemode-5518                      10s         Normal    Started                    pod/gluster-server                                               Started container gluster-server\nvolumemode-5518                      7s          Warning   ProvisioningFailed         persistentvolumeclaim/pvc-thtt5                                  storageclass.storage.k8s.io \"volumemode-5518\" not found\nvolumemode-8010                      21s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-d58v-j28kh               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-8010                      21s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-d58v-j28kh               Created container agnhost\nvolumemode-8010                      21s         Normal    Started                    
pod/hostexec-bootstrap-e2e-minion-group-d58v-j28kh               Started container agnhost\nvolumemode-8010                      9s          Normal    Killing                    pod/hostexec-bootstrap-e2e-minion-group-d58v-j28kh               Stopping container agnhost\nvolumemode-8010                      47s         Normal    Pulled                     pod/hostexec-bootstrap-e2e-minion-group-d58v-ss69n               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-8010                      47s         Normal    Created                    pod/hostexec-bootstrap-e2e-minion-group-d58v-ss69n               Created container agnhost\nvolumemode-8010                      47s         Normal    Started                    pod/hostexec-bootstrap-e2e-minion-group-d58v-ss69n               Started container agnhost\nvolumemode-8010                      34s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-qt8lm                                  storageclass.storage.k8s.io \"volumemode-8010\" not found\nvolumemode-8010                      26s         Normal    Scheduled                  pod/security-context-60668457-80cb-467b-99ff-bb82d58250d8        Successfully assigned volumemode-8010/security-context-60668457-80cb-467b-99ff-bb82d58250d8 to bootstrap-e2e-minion-group-d58v\nvolumemode-8010                      24s         Normal    Pulled                     pod/security-context-60668457-80cb-467b-99ff-bb82d58250d8        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-8010                      24s         Normal    Created                    pod/security-context-60668457-80cb-467b-99ff-bb82d58250d8        Created container write-pod\nvolumemode-8010                      24s         Normal    Started                    pod/security-context-60668457-80cb-467b-99ff-bb82d58250d8        Started container write-pod\nvolumemode-8010                      9s          
Normal    Killing                    pod/security-context-60668457-80cb-467b-99ff-bb82d58250d8        Stopping container write-pod\n"
Jan 17 00:03:27.650: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config get secrets --all-namespaces'
Jan 17 00:03:28.295: INFO: stderr: ""
Jan 17 00:03:28.296: INFO: stdout: "NAMESPACE                            NAME                                                   TYPE                                  DATA   AGE\napparmor-3326                        default-token-f2cv2                                    kubernetes.io/service-account-token   3      61s\nclientset-3493                       default-token-mgj66                                    kubernetes.io/service-account-token   3      35s\nconfigmap-8878                       default-token-g6bs5                                    kubernetes.io/service-account-token   3      83s\ncontainer-probe-251                  default-token-kr7j2                                    kubernetes.io/service-account-token   3      90s\ncontainer-probe-2757                 default-token-xnm9m                                    kubernetes.io/service-account-token   3      112s\ncontainer-probe-6927                 default-token-p6pqg                                    kubernetes.io/service-account-token   3      3m47s\ncontainers-483                       default-token-bknpx                                    kubernetes.io/service-account-token   3      61s\ncrd-webhook-1280                     default-token-x72tc                                    kubernetes.io/service-account-token   3      69s\ncsi-mock-volumes-5764                csi-attacher-token-zwsxh                               kubernetes.io/service-account-token   3      100s\ncsi-mock-volumes-5764                csi-mock-token-cm6vw                                   kubernetes.io/service-account-token   3      92s\ncsi-mock-volumes-5764                csi-provisioner-token-k7bwm                            kubernetes.io/service-account-token   3      98s\ncsi-mock-volumes-5764                csi-resizer-token-xzkjb                                kubernetes.io/service-account-token   3      94s\ncsi-mock-volumes-5764                default-token-tx2dz                                    
kubernetes.io/service-account-token   3      104s\ncsi-mock-volumes-9568                csi-attacher-token-fgv9z                               kubernetes.io/service-account-token   3      112s\ncsi-mock-volumes-9568                csi-mock-token-h6ltc                                   kubernetes.io/service-account-token   3      105s\ncsi-mock-volumes-9568                csi-provisioner-token-h9247                            kubernetes.io/service-account-token   3      110s\ncsi-mock-volumes-9568                csi-resizer-token-nqvdf                                kubernetes.io/service-account-token   3      108s\ncsi-mock-vol



... skipping 26001 lines ...
• [SLOW TEST:28.703 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":15,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
... skipping 67 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":24,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:35.131: INFO: Only supported for providers [openstack] (not gce)
... skipping 35 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:34.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-6635" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":12,"skipped":52,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:35.289: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 103 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1054
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1099
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":19,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:35.473: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:35.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 79 lines ...
• [SLOW TEST:65.744 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:101.889 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:41.336: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 41 lines ...
Jan 17 00:10:57.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:10:57.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:00.171: INFO: Unable to read jessie_udp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:00.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:00.885: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:01.164: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:02.652: INFO: Lookups using dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289 failed for: [wheezy_udp@dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@dns-test-service.dns-8165.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local jessie_udp@dns-test-service.dns-8165.svc.cluster.local jessie_tcp@dns-test-service.dns-8165.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local]

Jan 17 00:11:07.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:08.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:08.506: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:08.706: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:11.919: INFO: Unable to read jessie_udp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:12.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:12.885: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:13.082: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:14.633: INFO: Lookups using dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289 failed for: [wheezy_udp@dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@dns-test-service.dns-8165.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local jessie_udp@dns-test-service.dns-8165.svc.cluster.local jessie_tcp@dns-test-service.dns-8165.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local]

Jan 17 00:11:18.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:18.846: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:19.274: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:19.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:22.063: INFO: Unable to read jessie_udp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:22.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:22.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local from pod dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289: the server could not find the requested resource (get pods dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289)
Jan 17 00:11:25.879: INFO: Lookups using dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289 failed for: [wheezy_udp@dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@dns-test-service.dns-8165.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local jessie_udp@dns-test-service.dns-8165.svc.cluster.local jessie_tcp@dns-test-service.dns-8165.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8165.svc.cluster.local]

Jan 17 00:11:36.147: INFO: DNS probes using dns-8165/dns-test-6b1a0741-2d79-48a7-a91e-3232b6fc6289 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:58.941 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":20,"skipped":119,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:41.904: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 95 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":14,"skipped":124,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:42.140: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:42.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 38 lines ...
• [SLOW TEST:7.156 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should patch a secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:141
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret","total":-1,"completed":15,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:48.498: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:48.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 89 lines ...
• [SLOW TEST:41.650 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":16,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:50.262: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:50.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:11.743 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:50.768: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:50.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 129 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":24,"skipped":110,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":19,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:54.121: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 157 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":22,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:11:58.920: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:11:58.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:28.519 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:53
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":20,"skipped":136,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":15,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 23 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332

      Driver "local" does not provide raw block - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:101
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":11,"skipped":73,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:11:19.779: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-8362
... skipping 228 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":19,"skipped":87,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:05.186: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 37 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:04.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3084" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":23,"skipped":108,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:05.427: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 41 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":16,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:06.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8191" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":24,"skipped":111,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 53 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":17,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:07.513: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 99 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:08.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podsecuritypolicy-8041" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available","total":-1,"completed":25,"skipped":116,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 159 lines ...
Jan 17 00:11:44.244: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config create -f - --namespace=kubectl-8987'
Jan 17 00:11:45.163: INFO: stderr: ""
Jan 17 00:11:45.163: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 17 00:11:45.163: INFO: Waiting for all frontend pods to be Running.
Jan 17 00:11:55.363: INFO: Waiting for frontend to serve content.
Jan 17 00:11:57.763: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Jan 17 00:12:03.412: INFO: Trying to add a new entry to the guestbook.
Jan 17 00:12:03.752: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 17 00:12:04.141: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.203.169.247 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8987'
Jan 17 00:12:05.743: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 17 00:12:05.743: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":25,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:09.460: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Jan 17 00:12:03.714: INFO: Got stdout from 34.83.181.121:22: Hello from prow@bootstrap-e2e-minion-group-zzr9
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jan 17 00:12:04.652: INFO: Got stdout from 34.82.20.215:22: stdout
Jan 17 00:12:04.652: INFO: Got stderr from 34.82.20.215:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-8110" for this suite.


• [SLOW TEST:11.068 seconds]
[k8s.io] [sig-node] SSH
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should SSH to all nodes and run commands
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":21,"skipped":138,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:10.022: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 111 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:10.030: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:10.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 176 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":16,"skipped":110,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:11.684: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 60 lines ...
• [SLOW TEST:18.068 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":17,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:11.851: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:11.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 137 lines ...
• [SLOW TEST:9.176 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":21,"skipped":126,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:17.170: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 53 lines ...
STEP: Creating the service on top of the pods in kubernetes
Jan 17 00:11:14.638: INFO: Service node-port-service in namespace nettest-9783 found.
Jan 17 00:11:15.992: INFO: Service session-affinity-service in namespace nettest-9783 found.
STEP: dialing(http) 34.82.20.215 (node) --> 10.0.95.59:80 (config.clusterIP)
Jan 17 00:11:16.819: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.95.59:80/hostName | grep -v '^\s*$'] Namespace:nettest-9783 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:11:16.819: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:11:20.548: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.95.59:80/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jan 17 00:11:20.548: INFO: Waiting for [netserver-0 netserver-1 netserver-2 netserver-3] endpoints (expected=[netserver-0 netserver-1 netserver-2 netserver-3], actual=[])
Jan 17 00:11:22.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.95.59:80/hostName | grep -v '^\s*$'] Namespace:nettest-9783 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:11:22.715: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:11:24.591: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2 netserver-3], actual=[netserver-3])
Jan 17 00:11:27.004: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.95.59:80/hostName | grep -v '^\s*$'] Namespace:nettest-9783 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:11:27.004: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for node-Service: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:181
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":-1,"completed":17,"skipped":103,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:19.076: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
• [SLOW TEST:9.146 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, replicaSet, percentage => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction","total":-1,"completed":22,"skipped":145,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:19.180: INFO: Only supported for providers [aws] (not gce)
... skipping 56 lines ...
• [SLOW TEST:29.248 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":15,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:20.029: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 39 lines ...
• [SLOW TEST:7.187 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":152,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:26.378: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:26.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 35 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:28.441: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:28.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an explicit non-root user ID [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:123
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":18,"skipped":100,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:30.185: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 70 lines ...
• [SLOW TEST:19.083 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":117,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:30.773: INFO: Driver vsphere doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:30.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 244 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should return command exit codes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:645
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":16,"skipped":115,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":16,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:37.074: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":18,"skipped":120,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 44 lines ...
Jan 17 00:12:08.948: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:12:12.476: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-8709 PodName:gcepd-client ContainerName:gcepd-client Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:12:12.476: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after gcepd
Jan 17 00:12:15.423: INFO: Deleting pod "gcepd-client" in namespace "volume-8709"
Jan 17 00:12:16.397: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 17 00:12:24.433: INFO: error deleting PD "bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:12:24.433: INFO: Couldn't delete PD "bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:12:30.912: INFO: error deleting PD "bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:12:30.912: INFO: Couldn't delete PD "bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:12:38.245: INFO: Successfully deleted PD "bootstrap-e2e-045dec5f-3467-49db-a3a4-2d82725d71df".
Jan 17 00:12:38.245: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:38.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8709" for this suite.
... skipping 180 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-7435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":24,"skipped":153,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":17,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:40.256: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:40.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:276
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":18,"skipped":132,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:40.379: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:40.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":22,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:12:38.610: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 48 lines ...
  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_zone_support.go:106
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":20,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:12:07.730: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 44 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":21,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:45.755: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    should reuse port when apply to an existing SVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:937
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":25,"skipped":156,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:47.066: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 151 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":21,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:48.284: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:48.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":20,"skipped":98,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:49.051: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:49.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 59 lines ...
• [SLOW TEST:10.342 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":17,"skipped":139,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:34.759 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":22,"skipped":128,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":16,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:52.346: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:12:52.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
• [SLOW TEST:12.431 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:105
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":19,"skipped":141,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:52.830: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 193 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":18,"skipped":92,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:12:59.399: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
• [SLOW TEST:93.174 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":126,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:05.681: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 180 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":15,"skipped":131,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:18.670 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":23,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:10.615: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:10.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 208 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":15,"skipped":113,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:11:49.758: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9153
... skipping 132 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":26,"skipped":119,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:21.970: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:21.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 135 lines ...
• [SLOW TEST:17.024 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":133,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:23.214: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 50 lines ...
• [SLOW TEST:16.038 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":134,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":25,"skipped":111,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:24.617: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:24.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:22.718: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1582
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:137
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1582" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":23,"skipped":135,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 147 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":115,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:34.663: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 23 lines ...
• [SLOW TEST:123.463 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:172
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":13,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:38.769: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:38.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should not be able to pull from private registry without secret [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:380
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":26,"skipped":118,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:39.676: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":20,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 139 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":18,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:42.655: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 59 lines ...
• [SLOW TEST:19.770 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:202
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":24,"skipped":140,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:47.416: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 260 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":21,"skipped":102,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 47 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":26,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:48.203: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:48.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 82 lines ...
      Driver local doesn't support ext4 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":53,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:14.930: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3612
... skipping 16 lines ...
• [SLOW TEST:34.500 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":17,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [sig-windows] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jan 17 00:13:49.433: INFO: Only supported for node OS distro [windows] (not gci)
... skipping 96 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":27,"skipped":128,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:50.400: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 166 lines ...
• [SLOW TEST:16.146 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":142,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:13:50.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3317" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":19,"skipped":116,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes:vsphere
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Jan 17 00:13:51.644: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [1.224 seconds]
[sig-storage] PersistentVolumes:vsphere
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:163

  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":56,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:17.551: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5294
... skipping 36 lines ...
• [SLOW TEST:40.789 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:438
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":12,"skipped":56,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:13:58.348: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":63,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:11:54.835: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename mount-propagation
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in mount-propagation-5636
... skipping 61 lines ...
Jan 17 00:12:54.272: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:12:55.523: INFO: Exec stderr: ""
Jan 17 00:12:58.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-5636"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-5636"/host; echo host > "/var/lib/kubelet/mount-propagation-5636"/host/file] Namespace:mount-propagation-5636 PodName:hostexec-bootstrap-e2e-minion-group-zzr9-qhx58 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 00:12:58.332: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:12:59.532: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5636 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:12:59.533: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:01.583: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jan 17 00:13:01.939: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5636 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:01.940: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:03.989: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:04.209: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5636 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:04.209: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:05.657: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:05.892: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5636 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:05.892: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:07.456: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:07.799: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5636 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:07.799: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:10.092: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jan 17 00:13:10.226: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5636 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:10.226: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:11.419: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jan 17 00:13:11.732: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5636 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:11.732: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:12.415: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jan 17 00:13:12.530: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5636 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:12.530: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:13.974: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:14.342: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5636 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:14.342: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:15.772: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:15.884: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5636 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:15.884: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:17.525: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jan 17 00:13:18.018: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5636 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:18.018: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:19.681: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:20.276: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5636 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:20.276: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:21.797: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:21.974: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5636 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:21.974: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:23.292: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jan 17 00:13:23.479: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5636 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:23.479: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:24.604: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:24.734: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5636 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:24.734: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:27.305: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:27.671: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5636 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:27.671: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:28.474: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:28.587: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5636 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:28.588: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:29.259: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:29.345: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5636 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:29.345: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:30.546: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:30.804: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5636 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:30.804: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:32.058: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jan 17 00:13:32.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5636 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 00:13:32.394: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:34.611: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 17 00:13:34.611: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-5636"/master/file` = master] Namespace:mount-propagation-5636 PodName:hostexec-bootstrap-e2e-minion-group-zzr9-qhx58 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 00:13:34.611: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:36.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-5636"/slave/file] Namespace:mount-propagation-5636 PodName:hostexec-bootstrap-e2e-minion-group-zzr9-qhx58 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 00:13:36.064: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 00:13:39.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-5636"/host] Namespace:mount-propagation-5636 PodName:hostexec-bootstrap-e2e-minion-group-zzr9-qhx58 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 00:13:39.306: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 21 lines ...
• [SLOW TEST:126.796 seconds]
[k8s.io] [sig-node] Mount propagation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should propagate mounts to the host
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
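The exec matrix above is the observable outcome of the pod-level `mountPropagation` settings this test exercises: the "master" pod sees its own and the host's mounts (Bidirectional), "slave" additionally sees master's (HostToContainer), while "private" and "default" see only their own. A minimal sketch of such a pod spec — names and paths here are illustrative, not taken from the test:

```yaml
# Hedged sketch: pod/volume names are hypothetical, not from the log above.
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-demo
spec:
  containers:
  - name: cntr
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true                  # Bidirectional requires a privileged container
    volumeMounts:
    - name: test-dir
      mountPath: /mnt/test
      mountPropagation: Bidirectional   # "master" case: mounts flow both ways
      # HostToContainer would give the "slave" behavior (host mounts become visible);
      # None (the default) gives the "private"/"default" behavior (no propagation).
  volumes:
  - name: test-dir
    hostPath:
      path: /var/lib/kubelet/mount-propagation-demo
```

The final `nsenter ... umount` step in the log is the host-side cleanup check: a Bidirectional mount made inside the container must be visible, and removable, from the host mount namespace.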
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":15,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:01.635: INFO: Only supported for providers [aws] (not gce)
... skipping 122 lines ...
• [SLOW TEST:80.286 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:113
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":22,"skipped":90,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:06.051: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 83 lines ...
• [SLOW TEST:28.226 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:07.002: INFO: Only supported for providers [vsphere] (not gce)
... skipping 37 lines ...
Jan 17 00:13:39.064: INFO: Waiting for PV local-pvx5cvj to bind to PVC pvc-zqnvk
Jan 17 00:13:39.064: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zqnvk] to have phase Bound
Jan 17 00:13:39.270: INFO: PersistentVolumeClaim pvc-zqnvk found but phase is Pending instead of Bound.
Jan 17 00:13:41.350: INFO: PersistentVolumeClaim pvc-zqnvk found and phase=Bound (2.28586043s)
Jan 17 00:13:41.350: INFO: Waiting up to 3m0s for PersistentVolume local-pvx5cvj to have phase Bound
Jan 17 00:13:41.437: INFO: PersistentVolume local-pvx5cvj found and phase=Bound (87.396882ms)
[It] should fail scheduling due to different NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 17 00:13:41.864: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c005460a-fe88-4a21-abe9-d0846b971ea7] Namespace:persistent-local-volumes-test-5605 PodName:hostexec-bootstrap-e2e-minion-group-6tqd-8zt9g ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 00:13:41.864: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 30 lines ...

• [SLOW TEST:44.184 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeAffinity
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":17,"skipped":100,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:07.407: INFO: Only supported for providers [aws] (not gce)
... skipping 141 lines ...
• [SLOW TEST:17.803 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":19,"skipped":150,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:08.383: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:14:08.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 103 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":17,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:11.552: INFO: Only supported for providers [aws] (not gce)
... skipping 49 lines ...
• [SLOW TEST:14.169 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a volume subpath [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:161
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]","total":-1,"completed":16,"skipped":67,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:48.223: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-2617
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:14:15.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2617" for this suite.


• [SLOW TEST:27.588 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
------------------------------
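The "should fail to exceed backoffLimit" test above checks that a Job stops retrying once its failed-pod budget is spent ("Checking that 2 pod created and status is failed" corresponds to the initial attempt plus one retry). A minimal sketch of a Job with that shape — the name and command are illustrative assumptions:

```yaml
# Hedged sketch: Job name and failing command are hypothetical.
apiVersion: batch/v1
kind: Job
metadata:
  name: backofflimit-demo
spec:
  backoffLimit: 1            # one retry allowed; after 2 failed pods the Job is marked Failed
  template:
    spec:
      restartPolicy: Never   # each failure produces a new pod, counted against the limit
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]
```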
[BeforeEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:12:59.407: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 42 lines ...
• [SLOW TEST:77.253 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":19,"skipped":103,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:16.665: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 106 lines ...
Jan 17 00:13:08.058: INFO: PersistentVolumeClaim csi-hostpath59ghl found but phase is Pending instead of Bound.
Jan 17 00:13:10.217: INFO: PersistentVolumeClaim csi-hostpath59ghl found but phase is Pending instead of Bound.
Jan 17 00:13:12.362: INFO: PersistentVolumeClaim csi-hostpath59ghl found but phase is Pending instead of Bound.
Jan 17 00:13:14.715: INFO: PersistentVolumeClaim csi-hostpath59ghl found and phase=Bound (16.395003512s)
STEP: Expanding non-expandable pvc
Jan 17 00:13:15.070: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 17 00:13:15.253: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:18.012: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:19.857: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:21.818: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:23.757: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:25.697: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:27.662: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:29.529: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:31.553: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:34.561: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:35.587: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:37.525: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:39.726: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:41.445: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:43.756: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:45.765: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 00:13:46.477: INFO: Error updating pvc csi-hostpath59ghl: persistentvolumeclaims "csi-hostpath59ghl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 17 00:13:46.477: INFO: Deleting PersistentVolumeClaim "csi-hostpath59ghl"
Jan 17 00:13:46.890: INFO: Waiting up to 5m0s for PersistentVolume pvc-00580fdf-07a0-41d1-8f43-b69b0de9d774 to get deleted
Jan 17 00:13:47.264: INFO: PersistentVolume pvc-00580fdf-07a0-41d1-8f43-b69b0de9d774 found and phase=Bound (374.708478ms)
Jan 17 00:13:52.530: INFO: PersistentVolume pvc-00580fdf-07a0-41d1-8f43-b69b0de9d774 was removed
STEP: Deleting sc
... skipping 43 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
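The repeated "forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize" errors above are the expected outcome: the API server rejects the PVC size increase because the StorageClass lacks `allowVolumeExpansion`. A sketch of the field involved — the class name and provisioner are illustrative, though `hostpath.csi.k8s.io` matches the csi-hostpath driver under test:

```yaml
# Hedged sketch: the class name is hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-expandable
provisioner: hostpath.csi.k8s.io
allowVolumeExpansion: true   # omit or set false and PVC resize requests are rejected, as logged above
```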
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":26,"skipped":163,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:127.642 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:234
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":13,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:20.804: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:14:20.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 121 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with mount options
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":22,"skipped":137,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 53 lines ...
• [SLOW TEST:24.799 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:97
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":13,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:23.160: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:14:23.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":24,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:28.052: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:14:28.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":15,"skipped":74,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 142 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":23,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:28.791: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":155,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:14:21.676: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6709
... skipping 20 lines ...
• [SLOW TEST:12.133 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":155,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:33.823: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:445
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:449
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":25,"skipped":153,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":19,"skipped":124,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 67 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":17,"skipped":135,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:37.638: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 73 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:224
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:237
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":22,"skipped":160,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:39.010: INFO: Driver gluster doesn't support ntfs -- skipping
... skipping 86 lines ...
• [SLOW TEST:20.456 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":14,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 40 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":22,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:45.167: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":20,"skipped":119,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:45.646: INFO: Only supported for providers [aws] (not gce)
... skipping 141 lines ...
• [SLOW TEST:8.577 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":170,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:47.611: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 146 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":145,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:51.606: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
      Driver vsphere doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":27,"skipped":107,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:14:15.813: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9712
... skipping 77 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":28,"skipped":107,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes:vsphere
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:14:51.738: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-7620
... skipping 19 lines ...
  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":12,"skipped":96,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:07:11.109: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9145
... skipping 145 lines ...
• [SLOW TEST:5.471 seconds]
[k8s.io] Lease
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":26,"skipped":162,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:14:58.178: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 187 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":22,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:06.767: INFO: Driver local doesn't support ext3 -- skipping
... skipping 282 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 00:14:37.816: INFO: File wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:38.208: INFO: File jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:38.208: INFO: Lookups using dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 failed for: [wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local]

Jan 17 00:14:43.984: INFO: File wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:45.122: INFO: File jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:45.122: INFO: Lookups using dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 failed for: [wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local]

Jan 17 00:14:48.587: INFO: File wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:48.843: INFO: File jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 00:14:48.843: INFO: Lookups using dns-5041/dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 failed for: [wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local]

Jan 17 00:14:53.402: INFO: DNS probes using dns-test-83cc20b0-e3a6-4b61-9c83-801d7627c6e0 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5041.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5041.svc.cluster.local; sleep 1; done
... skipping 2 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 00:15:00.686: INFO: File jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local from pod  dns-5041/dns-test-0374266b-77cd-4e5d-99d9-9add5505e588 contains '' instead of '10.0.165.80'
Jan 17 00:15:00.686: INFO: Lookups using dns-5041/dns-test-0374266b-77cd-4e5d-99d9-9add5505e588 failed for: [jessie_udp@dns-test-service-3.dns-5041.svc.cluster.local]

Jan 17 00:15:06.086: INFO: DNS probes using dns-test-0374266b-77cd-4e5d-99d9-9add5505e588 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:56.284 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":18,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:07.440: INFO: Only supported for providers [aws] (not gce)
... skipping 189 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":27,"skipped":122,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:11.142: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:15:11.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
• [SLOW TEST:9.972 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:57.152 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":14,"skipped":104,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":113,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:13:18.852: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-2455
... skipping 40 lines ...
Jan 17 00:13:37.304: INFO: PersistentVolumeClaim pvc-689qm found but phase is Pending instead of Bound.
Jan 17 00:13:39.670: INFO: PersistentVolumeClaim pvc-689qm found but phase is Pending instead of Bound.
Jan 17 00:13:41.770: INFO: PersistentVolumeClaim pvc-689qm found but phase is Pending instead of Bound.
Jan 17 00:13:43.999: INFO: PersistentVolumeClaim pvc-689qm found but phase is Pending instead of Bound.
Jan 17 00:13:46.368: INFO: PersistentVolumeClaim pvc-689qm found and phase=Bound (18.407922493s)
STEP: checking for CSIInlineVolumes feature
Jan 17 00:14:23.838: INFO: Error getting logs for pod csi-inline-volume-md2l8: the server rejected our request for an unknown reason (get pods csi-inline-volume-md2l8)
STEP: Deleting pod csi-inline-volume-md2l8 in namespace csi-mock-volumes-2455
STEP: Deleting the previously created pod
Jan 17 00:14:34.741: INFO: Deleting pod "pvc-volume-tester-5n5k4" in namespace "csi-mock-volumes-2455"
Jan 17 00:14:35.000: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5n5k4" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 00:15:06.033: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2455","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2455","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2455","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2455","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-2455","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948","storage.kubernetes.io/csiProvisionerIdentity":"1579220024218-8081-csi-mock-csi-mock-volumes-2455"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-2455","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948","storage.kubernetes.io/csiProvisionerIdentity":"1579220024218-8081-csi-mock-csi-mock-volumes-2455"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948","storage.kubernetes.io/csiProvisionerIdentity":"1579220024218-8081-csi-mock-csi-mock-volumes-2455"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948/globalmount","target_path":"/var/lib/kubelet/pods/9ca1cce3-9980-4b2e-a888-c0b08724c260/volumes/kubernetes.io~csi/pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-5n5k4","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-2455","csi.storage.k8s.io/pod.uid":"9ca1cce3-9980-4b2e-a888-c0b08724c260","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948","storage.kubernetes.io/csiProvisionerIdentity":"1579220024218-8081-csi-mock-csi-mock-volumes-2455"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9ca1cce3-9980-4b2e-a888-c0b08724c260/volumes/kubernetes.io~csi/pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d653f186-a02b-4a35-a5a5-f8ffbe777948/globalmount"},"Response":{},"Error":""}

Jan 17 00:15:06.033: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 17 00:15:06.033: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-5n5k4
Jan 17 00:15:06.033: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2455
Jan 17 00:15:06.033: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9ca1cce3-9980-4b2e-a888-c0b08724c260
Jan 17 00:15:06.033: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
... skipping 79 lines ...
Jan 17 00:15:02.627: INFO: Trying to get logs from node bootstrap-e2e-minion-group-6tqd pod exec-volume-test-inlinevolume-w927 container exec-container-inlinevolume-w927: <nil>
STEP: delete the pod
Jan 17 00:15:03.185: INFO: Waiting for pod exec-volume-test-inlinevolume-w927 to disappear
Jan 17 00:15:03.315: INFO: Pod exec-volume-test-inlinevolume-w927 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-w927
Jan 17 00:15:03.315: INFO: Deleting pod "exec-volume-test-inlinevolume-w927" in namespace "volume-5815"
Jan 17 00:15:05.049: INFO: error deleting PD "bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:15:05.049: INFO: Couldn't delete PD "bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:15:11.666: INFO: error deleting PD "bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:15:11.666: INFO: Couldn't delete PD "bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-6tqd', resourceInUseByAnotherResource
Jan 17 00:15:19.148: INFO: Successfully deleted PD "bootstrap-e2e-1b845310-0542-44cc-aee6-34b915b717c5".
Jan 17 00:15:19.148: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:15:19.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5815" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":16,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:19.736: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 20 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver "nfs" does not provide raw block - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:101
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":17,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:14:06.706: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":18,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":20,"skipped":159,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:23.713: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 95 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":23,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:23.768: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 121 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":23,"skipped":117,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:24.668: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 70 lines ...
• [SLOW TEST:8.801 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 87 lines ...
• [SLOW TEST:42.767 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":23,"skipped":112,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:27.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an image specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":17,"skipped":113,"failed":0}
[BeforeEach] [sig-auth] Certificates API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:18.720: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-7496
... skipping 14 lines ...
• [SLOW TEST:13.880 seconds]
[sig-auth] Certificates API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39
------------------------------
{"msg":"PASSED [sig-auth] Certificates API should support building a client with a CSR","total":-1,"completed":18,"skipped":113,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":110,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:27.562: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-9039
... skipping 12 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    creating/deleting custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":30,"skipped":110,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:32.840: INFO: Driver local doesn't support ext4 -- skipping
... skipping 39 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":139,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:05.024: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-6456
... skipping 24 lines ...
• [SLOW TEST:28.397 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":19,"skipped":139,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:33.427: INFO: Only supported for providers [openstack] (not gce)
... skipping 214 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":125,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":68,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":13,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:14:55.458: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":14,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:43.690: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":15,"skipped":72,"failed":0}
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:31.661: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9333
... skipping 22 lines ...
• [SLOW TEST:12.689 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:44.353: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 156 lines ...
STEP: cleaning the environment after gcepd
Jan 17 00:15:25.892: INFO: Deleting pod "gcepd-client" in namespace "volume-6597"
Jan 17 00:15:26.281: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 17 00:15:36.779: INFO: Deleting PersistentVolumeClaim "pvc-q28jt"
Jan 17 00:15:37.372: INFO: Deleting PersistentVolume "gcepd-dqdm5"
Jan 17 00:15:39.465: INFO: error deleting PD "bootstrap-e2e-3436c9b0-92dc-42c4-ae4f-44302546faac": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-3436c9b0-92dc-42c4-ae4f-44302546faac' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:15:39.465: INFO: Couldn't delete PD "bootstrap-e2e-3436c9b0-92dc-42c4-ae4f-44302546faac", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-3436c9b0-92dc-42c4-ae4f-44302546faac' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:15:47.031: INFO: Successfully deleted PD "bootstrap-e2e-3436c9b0-92dc-42c4-ae4f-44302546faac".
Jan 17 00:15:47.031: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:15:47.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6597" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":18,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:49.033: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":21,"skipped":163,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:50.359: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
• [SLOW TEST:16.925 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":118,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:34.078 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":19,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:55.426: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 202 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:15:59.958: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 201 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":28,"skipped":148,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:02.242: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:02.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":19,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 161 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support inline execution and attach
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:688
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":24,"skipped":174,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":17,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:02.914: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":18,"skipped":115,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:03.569: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 60 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  NFSv4
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:74
    should be mountable for NFSv4
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:75
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4","total":-1,"completed":21,"skipped":143,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:05.027: INFO: Only supported for providers [aws] (not gce)
... skipping 209 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":24,"skipped":91,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:05.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9561" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":29,"skipped":152,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:05.703: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":27,"skipped":170,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:06.375: INFO: Only supported for providers [aws] (not gce)
... skipping 121 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":20,"skipped":126,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:09.640: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:09.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 125 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":27,"skipped":169,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":26,"skipped":155,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:22.250: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":25,"skipped":133,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:07.231: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7859
... skipping 23 lines ...
• [SLOW TEST:15.667 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:22.900: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:22.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 137 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":20,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:38.034 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":22,"skipped":167,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:28.410: INFO: Only supported for providers [openstack] (not gce)
... skipping 37 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":93,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:13.397: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4225
... skipping 84 lines ...
• [SLOW TEST:27.401 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":25,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] PrivilegedPod [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 56 lines ...
Jan 17 00:16:11.268: INFO: Trying to get logs from node bootstrap-e2e-minion-group-zzr9 pod exec-volume-test-inlinevolume-f88v container exec-container-inlinevolume-f88v: <nil>
STEP: delete the pod
Jan 17 00:16:12.159: INFO: Waiting for pod exec-volume-test-inlinevolume-f88v to disappear
Jan 17 00:16:12.435: INFO: Pod exec-volume-test-inlinevolume-f88v no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-f88v
Jan 17 00:16:12.435: INFO: Deleting pod "exec-volume-test-inlinevolume-f88v" in namespace "volume-4565"
Jan 17 00:16:14.082: INFO: error deleting PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:14.082: INFO: Couldn't delete PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:20.433: INFO: error deleting PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:20.433: INFO: Couldn't delete PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:26.796: INFO: error deleting PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:26.796: INFO: Couldn't delete PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zzr9', resourceInUseByAnotherResource
Jan 17 00:16:34.229: INFO: Successfully deleted PD "bootstrap-e2e-2aadb4eb-ae5b-47e5-ab4c-9a15c3dc4c47".
Jan 17 00:16:34.229: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:34.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4565" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":31,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:34.701: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":15,"skipped":106,"failed":0}
[BeforeEach] [k8s.io] [sig-node] PreStop
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:15:56.146: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5825
... skipping 33 lines ...
• [SLOW TEST:38.592 seconds]
[k8s.io] [sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":16,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:34.741: INFO: Driver emptydir doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:34.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 180 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":20,"skipped":124,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Volume Placement
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":27,"skipped":172,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:26.312: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7447
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should not run with an explicit root user ID [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:133
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":28,"skipped":172,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":93,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:29.527: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-protection-4680
... skipping 26 lines ...
• [SLOW TEST:11.098 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PV that is not bound to a PVC
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:98
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":20,"skipped":93,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:40.637: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 124 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":30,"skipped":154,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:45.538: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 37 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":21,"skipped":137,"failed":0}
[BeforeEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:33.273: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-4201
... skipping 71 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":-1,"completed":22,"skipped":137,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:47.417: INFO: Driver local doesn't support ntfs -- skipping
... skipping 62 lines ...
• [SLOW TEST:12.442 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:49.628: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:49.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 223 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":17,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:51.192: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:12.441 seconds]
[sig-instrumentation] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from API server.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:46
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":29,"skipped":177,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 00:16:51.418: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 00:16:51.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
• [SLOW TEST:16.041 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":32,"skipped":126,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 65 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":20,"skipped":158,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 2 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 00:16:34.747: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-4150
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191
Jan 17 00:16:37.250: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Jan 17 00:16:38.142: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 17 00:16:46.077: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan 17 00:16:56.196: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Not enough topologies in cluster -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:198
------------------------------
... skipping 59 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
... skipping 65 lines ...
STEP: Creating a pod to test atomic-volume-subpath
Jan 17 00:16:28.909: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-ff9l" in namespace "provisioning-2953" to be "success or failure"
Jan 17 00:16:29.168: INFO: Pod "pod-subpath-test-inlinevolume-ff9l": Phase="Pending", Reason="", readiness=false. Elapsed: 259.381995ms
Jan 17 00:16:31.937: INFO: Pod "pod-subpath-test-inlinevolume-ff9l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.028234678s
Jan 17 00:16:34.145: INFO: Pod "pod-subpath-test-inlinevolume-ff9l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.235784716s
Jan 17 00:16:36.405: INFO: Pod "pod-subpath-test-inlinevolume-ff9l": Phase="Pending", Reason="", readiness=false. Elapsed: 7.496229353s
Jan 17 00:16:38.743: INFO: Pod "pod-subpath-test-inlinevolume-ff9l": Phase="Failed", Reason="", readiness=false. Elapsed: 9.834406318s
Jan 17 00:16:40.801: INFO: Output of node "bootstrap-e2e-minion-group-d58v" pod "pod-subpath-test-inlinevolume-ff9l" container "init-volume-inlinevolume-ff9l": 
Jan 17 00:16:41.664: INFO: Failed to get logs from node "bootstrap-e2e-minion-group-d58v" pod "pod-subpath-test-inlinevolume-ff9l" container "test-container-subpath-inlinevolume-ff9l": the server rejected our request for an unknown reason (get pods pod-subpath-test-inlinevolume-ff9l)
STEP: delete the pod
Jan 17 00:16:42.760: INFO: Waiting for pod pod-subpath-test-inlinevolume-ff9l to disappear
Jan 17 00:16:43.032: INFO: Pod pod-subpath-test-inlinevolume-ff9l no longer exists
Jan 17 00:16:43.032: FAIL: Unexpected error:
    <*errors.errorString | 0xc003b60840>: {
        s: "expected pod \"pod-subpath-test-inlinevolume-ff9l\" success: pod \"pod-subpath-test-inlinevolume-ff9l\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotInitialized Message:containers with incomplete status: [init-volume-inlinevolume-ff9l]} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-inlinevolume-ff9l]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-inlinevolume-ff9l]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.64.0.51 PodIPs:[{IP:10.64.0.51}] StartTime:2020-01-17 00:16:29 +0000 UTC InitContainerStatuses:[{Name:init-volume-inlinevolume-ff9l State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-17 00:16:33 +0000 UTC,FinishedAt:2020-01-17 00:16:33 +0000 UTC,ContainerID:docker://3e5adc428ebe1b209b1a751b468bdad942f00f430b20d7f28a0eb4e3f95e8a1f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://3e5adc428ebe1b209b1a751b468bdad942f00f430b20d7f28a0eb4e3f95e8a1f Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-inlinevolume-ff9l 
State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID: ContainerID: Started:0xc001cbaa8a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    expected pod "pod-subpath-test-inlinevolume-ff9l" success: pod "pod-subpath-test-inlinevolume-ff9l" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotInitialized Message:containers with incomplete status: [init-volume-inlinevolume-ff9l]} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-inlinevolume-ff9l]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-inlinevolume-ff9l]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 00:16:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.64.0.51 PodIPs:[{IP:10.64.0.51}] StartTime:2020-01-17 00:16:29 +0000 UTC InitContainerStatuses:[{Name:init-volume-inlinevolume-ff9l State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-17 00:16:33 +0000 UTC,FinishedAt:2020-01-17 00:16:33 +0000 UTC,ContainerID:docker://3e5adc428ebe1b209b1a751b468bdad942f00f430b20d7f28a0eb4e3f95e8a1f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://3e5adc428ebe1b209b1a751b468bdad942f00f430b20d7f28a0eb4e3f95e8a1f Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-inlinevolume-ff9l 
State:{Waiting:&ContainerStateWaiting{Reason:PodInitializing,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID: ContainerID: Started:0xc001cbaa8a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0016fa500, 0x49c78fe, 0x15, 0xc000ff5c00, 0x0, 0xc00382f1f0, 0x1, 0x1, 0x4b720e8)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:829 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 19 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "provisioning-2953".
STEP: Found 4 events.
Jan 17 00:16:43.943: INFO: At 2020-01-17 00:16:29 +0000 UTC - event for pod-subpath-test-inlinevolume-ff9l: {default-scheduler } Scheduled: Successfully assigned provisioning-2953/pod-subpath-test-inlinevolume-ff9l to bootstrap-e2e-minion-group-d58v
Jan 17 00:16:43.943: INFO: At 2020-01-17 00:16:33 +0000 UTC - event for pod-subpath-test-inlinevolume-ff9l: {kubelet bootstrap-e2e-minion-group-d58v} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jan 17 00:16:43.943: INFO: At 2020-01-17 00:16:33 +0000 UTC - event for pod-subpath-test-inlinevolume-ff9l: {kubelet bootstrap-e2e-minion-group-d58v} Created: Created container init-volume-inlinevolume-ff9l
Jan 17 00:16:43.943: INFO: At 2020-01-17 00:16:35 +0000 UTC - event for pod-subpath-test-inlinevolume-ff9l: {kubelet bootstrap-e2e-minion-group-d58v} Failed: Error: failed to start container "init-volume-inlinevolume-ff9l": Error response from daemon: OCI runtime start failed: container process is already dead: unknown
Jan 17 00:16:44.331: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 17 00:16:44.331: INFO: 
Jan 17 00:16:44.670: INFO: 
Logging node info for node bootstrap-e2e-master
Jan 17 00:16:45.012: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master   /api/v1/nodes/bootstrap-e2e-master 296a0004-616f-4e3d-858c-47c88bfa7c15 31856 0 2020-01-16 23:55:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876818944 0} {<nil>} 3785956Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614674944 0} {<nil>} 3529956Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 23:55:26 +0000 UTC,LastTransitionTime:2020-01-16 23:55:26 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:07 +0000 UTC,LastTransitionTime:2020-01-16 23:55:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:07 +0000 UTC,LastTransitionTime:2020-01-16 23:55:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:07 +0000 UTC,LastTransitionTime:2020-01-16 23:55:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-17 00:16:07 +0000 UTC,LastTransitionTime:2020-01-16 23:55:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.203.169.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-soak-1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-soak-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2c171df663f8eadee6cb26812d287cc4,SystemUUID:2c171df6-63f8-eade-e6cb-26812d287cc4,BootID:18f73bed-f1f9-47a1-aed9-c81ee589685d,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,KubeProxyVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 
k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:214139260,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:204354909,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:114414862,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 17 00:16:45.013: INFO: 
Logging kubelet events for node bootstrap-e2e-master
Jan 17 00:16:45.386: INFO: 
Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 17 00:16:46.077: INFO: fluentd-gcp-v3.2.0-tz8kx started at 2020-01-16 23:55:31 +0000 UTC (0+2 container statuses recorded)
Jan 17 00:16:46.077: INFO: 	Container fluentd-gcp ready: true, restart count 0
... skipping 18 lines ...
Jan 17 00:16:46.077: INFO: l7-lb-controller-bootstrap-e2e-master started at 2020-01-16 23:55:13 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:46.077: INFO: 	Container l7-lb-controller ready: true, restart count 2
Jan 17 00:16:47.947: INFO: 
Latency metrics for node bootstrap-e2e-master
Jan 17 00:16:47.947: INFO: 
Logging node info for node bootstrap-e2e-minion-group-6tqd
Jan 17 00:16:48.469: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6tqd   /api/v1/nodes/bootstrap-e2e-minion-group-6tqd d89f3480-f2b6-4ed2-9a27-61cf36f9894e 31618 0 2020-01-16 23:55:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6tqd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6tqd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1724":"bootstrap-e2e-minion-group-6tqd","csi-hostpath-provisioning-2968":"bootstrap-e2e-minion-group-6tqd","csi-hostpath-provisioning-5941":"bootstrap-e2e-minion-group-6tqd","csi-hostpath-provisioning-7292":"bootstrap-e2e-minion-group-6tqd","csi-mock-csi-mock-volumes-2325":"csi-mock-csi-mock-volumes-2325","csi-mock-csi-mock-volumes-6067":"csi-mock-csi-mock-volumes-6067","csi-mock-csi-mock-volumes-9828":"csi-mock-csi-mock-volumes-9828"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-minion-group-6tqd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840251904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578107904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:10 +0000 UTC,LastTransitionTime:2020-01-16 23:55:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 23:55:18 +0000 UTC,LastTransitionTime:2020-01-16 23:55:18 +0000 UTC,Reason:RouteCreated,Message:RouteController 
created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:51 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:51 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:51 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-17 00:15:51 +0000 UTC,LastTransitionTime:2020-01-16 23:55:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.20.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6tqd.c.k8s-gce-soak-1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6tqd.c.k8s-gce-soak-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f741c3c917602668994b60b2b21d29,SystemUUID:09f741c3-c917-6026-6899-4b60b2b21d29,BootID:f7e210a0-f617-4592-ba00-4f298f825fe1,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,KubeProxyVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 
gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[fedora@sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433 fedora:latest],SizeBytes:194281245,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e 
quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-6f60c1fd-9f80-4384-91ba-88c85a509a6a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-6f60c1fd-9f80-4384-91ba-88c85a509a6a,DevicePath:/dev/disk/by-id/google-bootstrap-e2e-dynamic-pvc-6f60c1fd-9f80-4384-91ba-88c85a509a6a,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 17 00:16:48.470: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-6tqd
Jan 17 00:16:49.463: INFO: 
Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6tqd
Jan 17 00:16:49.979: INFO: netserver-0 started at 2020-01-17 00:10:30 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:49.979: INFO: 	Container webserver ready: true, restart count 0
... skipping 77 lines ...
Jan 17 00:16:49.980: INFO: ss2-2 started at 2020-01-17 00:16:35 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:49.980: INFO: 	Container webserver ready: true, restart count 0
Jan 17 00:16:51.129: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-6tqd
Jan 17 00:16:51.129: INFO: 
Logging node info for node bootstrap-e2e-minion-group-d58v
Jan 17 00:16:51.486: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-d58v   /api/v1/nodes/bootstrap-e2e-minion-group-d58v e1a2f9d4-9ebf-4290-a49b-b28a97d532fd 32427 0 2020-01-16 23:54:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-d58v kubernetes.io/os:linux mounted_volume_expand:mounted-volume-expand-2918 node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-d58v topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-726":"bootstrap-e2e-minion-group-d58v","csi-hostpath-provisioning-2251":"bootstrap-e2e-minion-group-d58v","csi-hostpath-provisioning-9079":"bootstrap-e2e-minion-group-d58v","csi-hostpath-provisioning-953":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volume-expand-2959":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volume-expand-4068":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volume-expand-560":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volume-expand-8136":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volume-expand-9737":"bootstrap-e2e-minion-group-d58v","csi-hostpath-volumemode-7602":"bootstrap-e2e-minion-group-d58v","csi-mock-csi-mock-volumes-1821":"csi-mock-csi-mock-volumes-1821","csi-mock-csi-mock-volumes-2455":"csi-mock-csi-mock-volumes-2455","csi-mock-csi-mock-volumes-5764":"csi-mock-csi-mock-volumes-5764","csi-mock-csi-mock-volumes-8266":"csi-mock-csi-mock-volumes-8266","csi-mock-csi-mock-volumes-9275":"csi-mock-csi-mock-volumes-9275","csi-mock-csi-mock-volumes-9568":"csi-mock-csi-mock-volumes-9568","csi-mock-csi-mock-volumes-9758":"csi-mock-csi-mock-volumes-9758"} 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-minion-group-d58v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840251904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578107904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 
UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 23:55:18 +0000 UTC,LastTransitionTime:2020-01-16 23:55:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:58 +0000 UTC,LastTransitionTime:2020-01-16 23:54:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:58 +0000 UTC,LastTransitionTime:2020-01-16 23:54:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-17 00:15:58 +0000 UTC,LastTransitionTime:2020-01-16 23:54:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-17 00:15:58 +0000 UTC,LastTransitionTime:2020-01-16 23:55:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.212.157,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-d58v.c.k8s-gce-soak-1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-d58v.c.k8s-gce-soak-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:66efd28b4d9221b45cd8725ef4ad4073,SystemUUID:66efd28b-4d92-21b4-5cd8-725ef4ad4073,BootID:5f1fbc4a-0bd8-4473-a470-9042835a55c4,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,KubeProxyVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 
gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[fedora@sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433 fedora:latest],SizeBytes:194281245,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 
quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7],SizeBytes:37930718,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 
quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/apparmor-loader@sha256:1fdc224b826c4bc16b3cdf5c09d6e5b8c7aa77e2b2d81472a1316bd1606fa1bd gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0],SizeBytes:13090050,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[flexvolume-k8s/dummy-attachable-flexvolume-9532/flex-volume-0 kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-fa753728-8f53-4273-8341-542ac14d81c2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-fa753728-8f53-4273-8341-542ac14d81c2,DevicePath:/dev/disk/by-id/google-bootstrap-e2e-dynamic-pvc-fa753728-8f53-4273-8341-542ac14d81c2,},AttachedVolume{Name:flexvolume-k8s/dummy-attachable-flexvolume-9532/flex-volume-0,DevicePath:foo,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 17 00:16:51.487: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-d58v
Jan 17 00:16:51.715: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-d58v
Jan 17 00:16:52.005: INFO: pod-subpath-test-preprovisionedpv-tqzx started at 2020-01-17 00:16:00 +0000 UTC (1+1 container statuses recorded)
Jan 17 00:16:52.005: INFO: 	Init container init-volume-preprovisionedpv-tqzx ready: true, restart count 0
... skipping 77 lines ...
Jan 17 00:16:52.005: INFO: run-test-3-w2446 started at 2020-01-17 00:15:28 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:52.005: INFO: 	Container run-test-3 ready: true, restart count 0
Jan 17 00:16:53.873: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-d58v
Jan 17 00:16:53.873: INFO: 
Logging node info for node bootstrap-e2e-minion-group-w9fq
Jan 17 00:16:54.007: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-w9fq   /api/v1/nodes/bootstrap-e2e-minion-group-w9fq 0b624a44-d3f1-427b-af22-60351d199746 32472 0 2020-01-16 23:55:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-w9fq kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-w9fq topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2930":"bootstrap-e2e-minion-group-w9fq","csi-hostpath-provisioning-6791":"bootstrap-e2e-minion-group-w9fq","csi-hostpath-volume-2564":"bootstrap-e2e-minion-group-w9fq","csi-hostpath-volumemode-868":"bootstrap-e2e-minion-group-w9fq","csi-mock-csi-mock-volumes-7456":"csi-mock-csi-mock-volumes-7456","csi-mock-csi-mock-volumes-8219":"csi-mock-csi-mock-volumes-8219"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-minion-group-w9fq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840251904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578107904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:03 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 23:55:18 +0000 UTC,LastTransitionTime:2020-01-16 23:55:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:32 +0000 UTC,LastTransitionTime:2020-01-16 23:55:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:32 +0000 UTC,LastTransitionTime:2020-01-16 23:55:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:32 +0000 UTC,LastTransitionTime:2020-01-16 23:55:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-17 00:16:32 +0000 UTC,LastTransitionTime:2020-01-16 23:55:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.81.252,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-w9fq.c.k8s-gce-soak-1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-w9fq.c.k8s-gce-soak-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c8faa56289e8b49395f6e042fce4ac99,SystemUUID:c8faa562-89e8-b493-95f6-e042fce4ac99,BootID:dd222877-27c5-4dc6-a0ec-f39130920f65,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,KubeProxyVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 
quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 
k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 
k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7],SizeBytes:37930718,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/apparmor-loader@sha256:1fdc224b826c4bc16b3cdf5c09d6e5b8c7aa77e2b2d81472a1316bd1606fa1bd gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0],SizeBytes:13090050,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:edafc0a0fb057813850d1ba44014914ca02d671ae247107ca70c94db686e7de6 busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:545e6a6310a27636260920bc07b994a299b6708a1b26910cfefd335fdfb60d2b k8s.gcr.io/busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-a4c9c2f0-1348-4e4f-a42b-5213d17a6501],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/gce-pd/bootstrap-e2e-dynamic-pvc-a4c9c2f0-1348-4e4f-a42b-5213d17a6501,DevicePath:/dev/disk/by-id/google-bootstrap-e2e-dynamic-pvc-a4c9c2f0-1348-4e4f-a42b-5213d17a6501,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 17 00:16:54.008: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-w9fq
Jan 17 00:16:54.170: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-w9fq
Jan 17 00:16:54.497: INFO: pod-0 started at 2020-01-17 00:14:27 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container busybox ready: true, restart count 0
Jan 17 00:16:54.497: INFO: csi-hostpath-resizer-0 started at 2020-01-17 00:15:16 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container csi-resizer ready: true, restart count 0
Jan 17 00:16:54.497: INFO: webserver-6f77d5bb-n2msr started at 2020-01-17 00:13:27 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container httpd ready: true, restart count 0
Jan 17 00:16:54.497: INFO: fail-once-non-local-bqzh6 started at 2020-01-17 00:16:46 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container c ready: false, restart count 0
Jan 17 00:16:54.497: INFO: busybox-privileged-true-cff842d6-ba46-4719-b1b1-d4456b0105ef started at 2020-01-17 00:12:33 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container busybox-privileged-true-cff842d6-ba46-4719-b1b1-d4456b0105ef ready: false, restart count 0
Jan 17 00:16:54.497: INFO: netserver-2 started at 2020-01-17 00:10:30 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container webserver ready: true, restart count 0
Jan 17 00:16:54.497: INFO: simpletest.rc-bllr7 started at 2020-01-17 00:14:27 +0000 UTC (0+1 container statuses recorded)
... skipping 19 lines ...
Jan 17 00:16:54.497: INFO: csi-hostpathplugin-0 started at 2020-01-17 00:15:13 +0000 UTC (0+3 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container hostpath ready: true, restart count 0
Jan 17 00:16:54.497: INFO: 	Container liveness-probe ready: true, restart count 0
Jan 17 00:16:54.497: INFO: 	Container node-driver-registrar ready: true, restart count 0
Jan 17 00:16:54.497: INFO: external-provisioner-9ffdt started at 2020-01-17 00:15:52 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.497: INFO: 	Container nfs-provisioner ready: true, restart count 0
Jan 17 00:16:54.497: INFO: fail-once-non-local-mcfcx started at 2020-01-17 00:16:50 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container c ready: false, restart count 0
Jan 17 00:16:54.498: INFO: fail-once-non-local-qlfz4 started at 2020-01-17 00:16:51 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container c ready: false, restart count 0
Jan 17 00:16:54.498: INFO: fluentd-gcp-v3.2.0-g8dmm started at 2020-01-16 23:56:20 +0000 UTC (0+2 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container fluentd-gcp ready: true, restart count 0
Jan 17 00:16:54.498: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 17 00:16:54.498: INFO: run-test-flfwp started at 2020-01-17 00:15:09 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container run-test ready: false, restart count 0
... skipping 6 lines ...
Jan 17 00:16:54.498: INFO: pod-hostip-84e1506b-9ce2-4221-88cf-62fcddce3582 started at 2020-01-17 00:16:43 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container test ready: true, restart count 0
Jan 17 00:16:54.498: INFO: pod-handle-http-request started at 2020-01-17 00:13:42 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container pod-handle-http-request ready: true, restart count 0
Jan 17 00:16:54.498: INFO: rs-7nx5w started at 2020-01-17 00:13:46 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container busybox ready: true, restart count 0
Jan 17 00:16:54.498: INFO: fail-once-non-local-nmxw5 started at 2020-01-17 00:16:46 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container c ready: false, restart count 0
Jan 17 00:16:54.498: INFO: webserver-6f77d5bb-45vt7 started at 2020-01-17 00:13:56 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container httpd ready: true, restart count 0
Jan 17 00:16:54.498: INFO: simpletest.rc-dlw8t started at 2020-01-17 00:14:27 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container nginx ready: true, restart count 0
Jan 17 00:16:54.498: INFO: metadata-proxy-v0.1-jf7n6 started at 2020-01-16 23:55:01 +0000 UTC (0+2 container statuses recorded)
... skipping 17 lines ...
Jan 17 00:16:54.498: INFO: gcepd-injector started at 2020-01-17 00:16:22 +0000 UTC (0+1 container statuses recorded)
Jan 17 00:16:54.498: INFO: 	Container gcepd-injector ready: true, restart count 0
Jan 17 00:16:55.674: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-w9fq
Jan 17 00:16:55.675: INFO: 
Logging node info for node bootstrap-e2e-minion-group-zzr9
Jan 17 00:16:55.891: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zzr9   /api/v1/nodes/bootstrap-e2e-minion-group-zzr9 295357e3-76cb-45ca-a522-763c2afc92e0 32715 0 2020-01-16 23:55:01 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zzr9 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zzr9 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6080":"bootstrap-e2e-minion-group-zzr9","csi-hostpath-provisioning-1455":"bootstrap-e2e-minion-group-zzr9","csi-hostpath-volume-8675":"bootstrap-e2e-minion-group-zzr9","csi-hostpath-volume-expand-7076":"bootstrap-e2e-minion-group-zzr9","csi-mock-csi-mock-volumes-3713":"csi-mock-csi-mock-volumes-3713"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-soak-1-5/us-west1-b/bootstrap-e2e-minion-group-zzr9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840251904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7578107904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-17 00:15:04 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 23:55:18 +0000 UTC,LastTransitionTime:2020-01-16 23:55:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:39 +0000 
UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:39 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-17 00:16:39 +0000 UTC,LastTransitionTime:2020-01-16 23:55:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-17 00:16:39 +0000 UTC,LastTransitionTime:2020-01-16 23:55:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:34.83.181.121,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zzr9.c.k8s-gce-soak-1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zzr9.c.k8s-gce-soak-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b91faa950b08b063b3f8192ed24b3266,SystemUUID:b91faa95-0b08-b063-b3f8-192ed24b3266,BootID:c7dbedec-fc14-4eaf-9426-f36fcecccc95,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,KubeProxyVersion:v1.18.0-alpha.1.836+6413f1ee2be99f,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[fedora@sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433 
fedora:latest],SizeBytes:194281245,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.836_6413f1ee2be99f],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd717