Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-17 12:57
Elapsed: 1h11m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/5c0d55c0-ec38-4593-a59b-51e6b8072632/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 455 lines ...
Project: k8s-jkns-gce-ubuntu-1-6-serial
Network Project: k8s-jkns-gce-ubuntu-1-6-serial
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network bootstrap-e2e: 
W0117 13:25:26.848942  106380 loader.go:223] Config not found: /workspace/.kube/config
... skipping 144 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.233.219.215; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

................Kubernetes cluster created.
Cluster "k8s-jkns-gce-ubuntu-1-6-serial_bootstrap-e2e" set.
User "k8s-jkns-gce-ubuntu-1-6-serial_bootstrap-e2e" set.
Context "k8s-jkns-gce-ubuntu-1-6-serial_bootstrap-e2e" created.
Switched to context "k8s-jkns-gce-ubuntu-1-6-serial_bootstrap-e2e".
... skipping 27 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   24s   v1.18.0-alpha.1.854+6278df2a972d2c
bootstrap-e2e-minion-group-cksd   Ready                      <none>   23s   v1.18.0-alpha.1.854+6278df2a972d2c
bootstrap-e2e-minion-group-hs9p   Ready                      <none>   23s   v1.18.0-alpha.1.854+6278df2a972d2c
bootstrap-e2e-minion-group-l1kf   Ready                      <none>   22s   v1.18.0-alpha.1.854+6278df2a972d2c
bootstrap-e2e-minion-group-mp1q   Ready                      <none>   21s   v1.18.0-alpha.1.854+6278df2a972d2c
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 77 lines ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=47012 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 11 lines ...
Specify --start=47820 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-cksd bootstrap-e2e-minion-group-hs9p bootstrap-e2e-minion-group-l1kf bootstrap-e2e-minion-group-mp1q
Failures for bootstrap-e2e-minion-group (if any):
2020/01/17 13:32:29 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m12.914790428s
2020/01/17 13:32:29 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-jkns-gce-ubuntu-1-6-serial
... skipping 486 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 59 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 219 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 11 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 291 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 111 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:50.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-8966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:32:51.243: INFO: Only supported for providers [azure] (not gce)
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:54.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cadvisor-306" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Cadvisor should be healthy on every node.","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 285 lines ...
• [SLOW TEST:7.368 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:32:57.201: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:57.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:57.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3633" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:32:58.007: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:58.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-windows] Hybrid cluster network
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jan 17 13:32:58.020: INFO: Only supported for node OS distro [windows] (not gci)
... skipping 93 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:32:59.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4136" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:00.113: INFO: Only supported for providers [openstack] (not gce)
... skipping 38 lines ...
• [SLOW TEST:12.817 seconds]
[sig-auth] Metadata Concealment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should run a check-metadata-concealment job to completion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:34
------------------------------
{"msg":"PASSED [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:02.667: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:15.371 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 61 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1054
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1099
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:11.803: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:11.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 79 lines ...
• [SLOW TEST:27.516 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:17.384: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 213 lines ...
Jan 17 13:33:11.405: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 17 13:33:11.405: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config describe pod agnhost-master-r5np9 --namespace=kubectl-9407'
Jan 17 13:33:11.910: INFO: stderr: ""
Jan 17 13:33:11.910: INFO: stdout: "Name:         agnhost-master-r5np9\nNamespace:    kubectl-9407\nPriority:     0\nNode:         bootstrap-e2e-minion-group-mp1q/10.138.0.4\nStart Time:   Fri, 17 Jan 2020 13:32:54 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  kubernetes.io/psp: e2e-test-privileged-psp\nStatus:       Running\nIP:           10.64.4.6\nIPs:\n  IP:           10.64.4.6\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://aed19cce96906a46967337b8a98f1062bead484cb85fffb178c7f1819c3ff6d1\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 17 Jan 2020 13:33:06 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qsznb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qsznb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qsznb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                      Message\n  ----    ------     ----  ----                                      -------\n  Normal  Scheduled  16s   default-scheduler                         Successfully assigned kubectl-9407/agnhost-master-r5np9 to bootstrap-e2e-minion-group-mp1q\n  Normal  Pulling    15s   kubelet, bootstrap-e2e-minion-group-mp1q  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n  Normal  Pulled     6s    kubelet, bootstrap-e2e-minion-group-mp1q  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n  Normal  Created    6s    kubelet, bootstrap-e2e-minion-group-mp1q  Created container agnhost-master\n  Normal  Started    5s    kubelet, bootstrap-e2e-minion-group-mp1q  Started container agnhost-master\n"
Jan 17 13:33:11.911: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config describe rc agnhost-master --namespace=kubectl-9407'
Jan 17 13:33:13.770: INFO: stderr: ""
Jan 17 13:33:13.770: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9407\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  19s   replication-controller  Created pod: agnhost-master-r5np9\n"
Jan 17 13:33:13.770: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config describe service agnhost-master --namespace=kubectl-9407'
Jan 17 13:33:15.074: INFO: stderr: ""
Jan 17 13:33:15.074: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9407\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.0.148.72\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.4.6:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 17 13:33:15.352: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config describe node bootstrap-e2e-master'
Jan 17 13:33:17.096: INFO: stderr: ""
Jan 17 13:33:17.096: INFO: stdout: "Name:               bootstrap-e2e-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=bootstrap-e2e-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-west1\n                    topology.kubernetes.io/zone=us-west1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 17 Jan 2020 13:29:39 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  bootstrap-e2e-master\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 17 Jan 2020 13:33:15 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 17 Jan 2020 13:29:52 +0000   Fri, 17 Jan 2020 13:29:52 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Fri, 17 Jan 2020 13:30:41 +0000   Fri, 17 Jan 2020 13:29:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 17 Jan 2020 13:30:41 +0000   Fri, 17 Jan 2020 13:29:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 17 Jan 2020 13:30:41 +0000   Fri, 17 Jan 2020 13:29:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 17 Jan 2020 13:30:41 +0000   Fri, 17 Jan 2020 13:29:50 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   10.138.0.2\n  ExternalIP:   35.233.219.215\n  InternalDNS:  bootstrap-e2e-master.c.k8s-jkns-gce-ubuntu-1-6-serial.internal\n  Hostname:     bootstrap-e2e-master.c.k8s-jkns-gce-ubuntu-1-6-serial.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3785940Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3529940Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 6a0ffb467d0796f522fe17a324e9044f\n  System UUID:                6a0ffb46-7d07-96f5-22fe-17a324e9044f\n  Boot ID:                    7388700b-27cf-4f94-899b-e1874a9bd479\n  Kernel Version:             4.19.76+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://19.3.1\n  Kubelet Version:            v1.18.0-alpha.1.854+6278df2a972d2c\n  Kube-Proxy Version:         v1.18.0-alpha.1.854+6278df2a972d2c\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://k8s-jkns-gce-ubuntu-1-6-serial/us-west1-b/bootstrap-e2e-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-empty-dir-cleanup-bootstrap-e2e-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s\n  kube-system                 etcd-server-bootstrap-e2e-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         3m8s\n  kube-system                 etcd-server-events-bootstrap-e2e-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         3m12s\n  kube-system                 fluentd-gcp-v3.2.0-zkr7k                        100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    3m6s\n  kube-system                 kube-addon-manager-bootstrap-e2e-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         2m49s\n  kube-system                 kube-apiserver-bootstrap-e2e-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         3m22s\n  kube-system                 kube-controller-manager-bootstrap-e2e-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         3m27s\n  kube-system                 kube-scheduler-bootstrap-e2e-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         3m6s\n  kube-system                 l7-lb-controller-bootstrap-e2e-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         2m34s\n  kube-system                 metadata-proxy-v0.1-4hnjt                       32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      3m37s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        972m (97%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage          
0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:                      <none>\n"
... skipping 11 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:14.606 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:19.814: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:19.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 101 lines ...
• [SLOW TEST:22.015 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:942
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:20.046: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:20.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 314 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:24.261: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:24.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 83 lines ...
• [SLOW TEST:5.415 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:25.241: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:25.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 63 lines ...
• [SLOW TEST:38.697 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Firewall rule should have correct firewall rules for e2e cluster","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:32:58.385: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 110 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:30.317: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:30.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 171 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:29.497: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-1154
... skipping 13 lines ...
• [SLOW TEST:6.545 seconds]
[sig-api-machinery] Discovery
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Custom resource should have storage version hash
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:44
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":3,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:25.243: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6962
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 17 13:33:37.245: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:39.465: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:39.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 198 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 56 lines ...
• [SLOW TEST:26.853 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:55.305 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:45.213: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:50.847: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:41.005: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2685
... skipping 23 lines ...
• [SLOW TEST:13.018 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:54.025: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:54.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:32:55.174: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9701
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:54.377: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:54.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 317 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:54.565: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 90 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:55.136: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:55.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
• [SLOW TEST:12.719 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
Jan 17 13:32:57.403: INFO: creating *v1.StatefulSet: csi-mock-volumes-4732/csi-mockplugin
Jan 17 13:32:57.581: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4732
Jan 17 13:32:57.859: INFO: creating *v1.StatefulSet: csi-mock-volumes-4732/csi-mockplugin-attacher
Jan 17 13:32:58.199: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4732"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jan 17 13:33:36.901: INFO: Error getting logs for pod csi-inline-volume-wg2bb: the server rejected our request for an unknown reason (get pods csi-inline-volume-wg2bb)
STEP: Deleting pod csi-inline-volume-wg2bb in namespace csi-mock-volumes-4732
WARNING: pod log: csi-inline-volume-wg2bb/csi-volume-tester: pods "csi-inline-volume-wg2bb" not found
STEP: Deleting the previously created pod
Jan 17 13:33:41.728: INFO: Deleting pod "pvc-volume-tester-85kws" in namespace "csi-mock-volumes-4732"
Jan 17 13:33:41.940: INFO: Wait up to 5m0s for pod "pvc-volume-tester-85kws" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 13:33:48.762: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4732","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4732","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4732","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4732","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-f36784dce31db445ad7205567953300fed15b00b271e42e737c8ab569cfd6a55","target_path":"/var/lib/kubelet/pods/51a2a879-517a-431a-97b4-b49d268aecbc/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-85kws","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-4732","csi.storage.k8s.io/pod.uid":"51a2a879-517a-431a-97b4-b49d268aecbc","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"csi-f36784dce31db445ad7205567953300fed15b00b271e42e737c8ab569cfd6a55","volume_path":"/var/lib/kubelet/pods/51a2a879-517a-431a-97b4-b49d268aecbc/volumes/kubernetes.io~csi/my-volume/mount"},"Response":null,"Error":"rpc error: code = NotFound desc = csi-f36784dce31db445ad7205567953300fed15b00b271e42e737c8ab569cfd6a55"}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-f36784dce31db445ad7205567953300fed15b00b271e42e737c8ab569cfd6a55","target_path":"/var/lib/kubelet/pods/51a2a879-517a-431a-97b4-b49d268aecbc/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Jan 17 13:33:48.762: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 51a2a879-517a-431a-97b4-b49d268aecbc
Jan 17 13:33:48.762: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jan 17 13:33:48.762: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 17 13:33:48.762: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-85kws
Jan 17 13:33:48.762: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4732
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:57.364: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 85 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:58.205: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 148 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 17 13:33:54.253: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7"
Jan 17 13:33:54.253: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7" in namespace "pods-2871" to be "terminated due to deadline exceeded"
Jan 17 13:33:54.443: INFO: Pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7": Phase="Running", Reason="", readiness=true. Elapsed: 189.804488ms
Jan 17 13:33:56.777: INFO: Pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7": Phase="Running", Reason="", readiness=true. Elapsed: 2.523815695s
Jan 17 13:33:59.181: INFO: Pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.928110634s
Jan 17 13:33:59.181: INFO: Pod "pod-update-activedeadlineseconds-8267bf37-a567-42ea-8cf3-aa94f5b2abb7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:59.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2871" for this suite.


• [SLOW TEST:12.213 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:33:59.644: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:33:59.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 176 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:00.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3350" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":2,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 12 lines ...
Jan 17 13:33:28.962: INFO: Creating resource for dynamic PV
Jan 17 13:33:28.962: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-444-gcepd-sc7xpgx
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jan 17 13:33:29.423: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 17 13:33:29.592: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:31.744: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:34.325: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:36.042: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:38.311: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:40.280: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:41.855: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:44.008: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:46.734: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:48.010: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:49.974: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:52.121: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:53.823: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:56.241: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:33:58.090: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:34:00.046: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 17 13:34:00.521: INFO: Error updating pvc gcepd22gw4: PersistentVolumeClaim "gcepd22gw4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 17 13:34:00.521: INFO: Deleting PersistentVolumeClaim "gcepd22gw4"
STEP: Deleting sc
Jan 17 13:34:01.063: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 8 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:01.342: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:01.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:01.467: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:01.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-4136" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":2,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:5.101 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should prevent NodePort collisions
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1755
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:02.881: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:02.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:04.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2366" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 83 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:09.092: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:09.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 184 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:09.871: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:09.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
Jan 17 13:33:16.573: INFO: PersistentVolumeClaim csi-hostpathg5psf found but phase is Pending instead of Bound.
Jan 17 13:33:18.651: INFO: PersistentVolumeClaim csi-hostpathg5psf found but phase is Pending instead of Bound.
Jan 17 13:33:20.879: INFO: PersistentVolumeClaim csi-hostpathg5psf found but phase is Pending instead of Bound.
Jan 17 13:33:23.063: INFO: PersistentVolumeClaim csi-hostpathg5psf found and phase=Bound (22.689745328s)
STEP: Expanding non-expandable pvc
Jan 17 13:33:23.810: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 17 13:33:24.425: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:27.097: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:28.636: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:30.688: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:33.072: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:34.825: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:36.906: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:39.089: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:40.829: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:42.797: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:45.527: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:46.984: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:48.771: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:50.585: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:53.242: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:54.753: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:33:55.134: INFO: Error updating pvc csi-hostpathg5psf: persistentvolumeclaims "csi-hostpathg5psf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 17 13:33:55.134: INFO: Deleting PersistentVolumeClaim "csi-hostpathg5psf"
Jan 17 13:33:55.317: INFO: Waiting up to 5m0s for PersistentVolume pvc-a1fb27b7-1e97-4074-b624-d3b45e008953 to get deleted
Jan 17 13:33:55.673: INFO: PersistentVolume pvc-a1fb27b7-1e97-4074-b624-d3b45e008953 found and phase=Bound (355.41462ms)
Jan 17 13:34:00.809: INFO: PersistentVolume pvc-a1fb27b7-1e97-4074-b624-d3b45e008953 was removed
STEP: Deleting sc
... skipping 44 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 82 lines ...
• [SLOW TEST:42.788 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:14.026: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:20.304 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":5,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:14.878: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
... skipping 37 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver cinder doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:31.061: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:16.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3539" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:17.199: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 123 lines ...
• [SLOW TEST:14.187 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:89
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-api-machinery] Generated clientset
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:14.022: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename clientset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in clientset-9013
... skipping 16 lines ...
• [SLOW TEST:10.222 seconds]
[sig-api-machinery] Generated clientset
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":4,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:24.250: INFO: Only supported for providers [openstack] (not gce)
... skipping 98 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:59.659: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3474
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:25.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3474" for this suite.


• [SLOW TEST:26.412 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
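The Job above relies on restartPolicy OnFailure, so a failing task is restarted in place by the kubelet ("locally restarted") rather than the Job controller creating a replacement pod. A sketch of that shape; the completion count, image, and command are illustrative, not the test's manifest:

package example

import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localRestartJob runs tasks to a fixed completion count; with
// RestartPolicyOnFailure, a failing container is restarted inside the
// same pod instead of the pod being replaced. Values are illustrative.
func localRestartJob() *batchv1.Job {
    completions := int32(4)
    return &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "example-job"},
        Spec: batchv1.JobSpec{
            Completions: &completions,
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyOnFailure,
                    Containers: []corev1.Container{{
                        Name:    "task",
                        Image:   "busybox",
                        Command: []string{"sh", "-c", "exit 0"},
                    }},
                },
            },
        },
    }
}
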
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:02.888: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 31 lines ...
• [SLOW TEST:23.514 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:26.403: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:26.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
• [SLOW TEST:29.331 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:27.559: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:27.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:33:28.578: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 112 lines ...
• [SLOW TEST:7.723 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create a PodDisruptionBudget
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:59
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:32.170: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:32.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:24.072: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4013
... skipping 85 lines ...
• [SLOW TEST:27.107 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:32.452: INFO: Only supported for providers [aws] (not gce)
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
... skipping 263 lines ...
• [SLOW TEST:19.399 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:471
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:35.201: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:35.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 78 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:15.921 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:42.332: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 67 lines ...
• [SLOW TEST:41.740 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:22.881: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 37 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:43.625: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:43.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 21 lines ...
Jan 17 13:34:41.046: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-disks-1696
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:447
STEP: delete a PD
W0117 13:34:43.196419  116842 gce_disks.go:972] GCE persistent disk "non-exist" not found in managed zones (us-west1-b)
Jan 17 13:34:43.196: INFO: Successfully deleted PD "non-exist".
[AfterEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:43.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-1696" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Pod Disks should be able to delete a non-existent PD without error","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:43.752: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:43.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:329
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:45.440: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:45.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:10.919 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:46.128: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:46.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
      Driver azure-disk doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:33.375: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8951
... skipping 55 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:46.271: INFO: Only supported for providers [vsphere] (not gce)
... skipping 84 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:46.505: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:46.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:88
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":5,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:46.511: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 231 lines ...
Jan 17 13:34:33.416: INFO: Pod exec-volume-test-preprovisionedpv-8jlb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8jlb
Jan 17 13:34:33.416: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8jlb" in namespace "volume-8901"
STEP: Deleting pv and pvc
Jan 17 13:34:33.488: INFO: Deleting PersistentVolumeClaim "pvc-lsmb2"
Jan 17 13:34:33.582: INFO: Deleting PersistentVolume "gcepd-52jd5"
Jan 17 13:34:34.818: INFO: error deleting PD "bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:34:34.818: INFO: Couldn't delete PD "bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:34:40.957: INFO: error deleting PD "bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:34:40.957: INFO: Couldn't delete PD "bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:34:48.098: INFO: Successfully deleted PD "bootstrap-e2e-53e0be40-5545-4f1d-9099-ca92abfbf3c8".
Jan 17 13:34:48.098: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8901" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:29.379: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-2740
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:115
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:49.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2740" for this suite.


• [SLOW TEST:21.278 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:115
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":3,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:50.659: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:34:50.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 102 lines ...
Jan 17 13:34:51.485: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [2.968 seconds]
[sig-storage] PersistentVolumes:vsphere
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:163

  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 40 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [k8s.io] NodeLease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:26.073: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-7240
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when the NodeLease feature is enabled
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:48
    the kubelet should report node status infrequently
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:111
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":4,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:52.070: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 123 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:54.140: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 352 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:54.363: INFO: Only supported for providers [aws] (not gce)
... skipping 87 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:43.231: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-6770
... skipping 13 lines ...
• [SLOW TEST:11.647 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, replicaSet, percentage => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction","total":-1,"completed":4,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:34:54.889: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 131 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:14.822 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:01.350: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:01.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 151 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with Single PV - PVC pairs
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:154
      should create a non-pre-bound PV and PVC: test write access 
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:168
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 56 lines ...
Jan 17 13:34:30.147: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-3746 -- grep  /opt/0  /proc/mounts'
Jan 17 13:34:32.918: INFO: stderr: ""
Jan 17 13:34:32.918: INFO: stdout: "/dev/sdb /opt/0 ext4 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Jan 17 13:34:32.919: INFO: Deleting pod "gcepd-client" in namespace "volume-3746"
Jan 17 13:34:33.144: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 17 13:34:48.589: INFO: error deleting PD "bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:34:48.589: INFO: Couldn't delete PD "bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:34:54.767: INFO: error deleting PD "bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:34:54.767: INFO: Couldn't delete PD "bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:35:01.949: INFO: Successfully deleted PD "bootstrap-e2e-78770321-25fe-4059-b4a6-9eb9a63dd0a1".
Jan 17 13:35:01.949: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:01.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3746" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:02.379: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:02.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] crictl should be able to run crictl on the node","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:47.568: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4925
... skipping 24 lines ...
• [SLOW TEST:18.269 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:05.839: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:05.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 76 lines ...
Jan 17 13:34:53.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864890, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 17 13:34:56.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864890, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 17 13:34:58.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864891, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714864890, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 13:35:01.542: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Jan 17 13:35:02.451: INFO: Waiting for webhook configuration to be ready...
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:04.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101


• [SLOW TEST:20.178 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.637 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:07.828: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
... skipping 52 lines ...
Jan 17 13:34:53.830: INFO: Pod exec-volume-test-preprovisionedpv-rg5z no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rg5z
Jan 17 13:34:53.830: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rg5z" in namespace "volume-2853"
STEP: Deleting pv and pvc
Jan 17 13:34:54.017: INFO: Deleting PersistentVolumeClaim "pvc-x2vpz"
Jan 17 13:34:54.287: INFO: Deleting PersistentVolume "gcepd-gx77m"
Jan 17 13:34:55.706: INFO: error deleting PD "bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:34:55.706: INFO: Couldn't delete PD "bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:35:01.918: INFO: error deleting PD "bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:35:01.918: INFO: Couldn't delete PD "bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:35:09.071: INFO: Successfully deleted PD "bootstrap-e2e-ac5d2b13-ac5a-43c0-84b4-2a762c91a9a2".
Jan 17 13:35:09.071: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:09.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2853" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:10.328: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:10.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 66 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:29.362 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a local redirect http liveness probe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:11.701: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 105 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 63 lines ...
• [SLOW TEST:20.541 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:12.041: INFO: Only supported for providers [vsphere] (not gce)
... skipping 56 lines ...
Jan 17 13:35:12.495: INFO: stderr: ""
Jan 17 13:35:12.495: INFO: stdout: "etcd-1 etcd-0 controller-manager scheduler"
STEP: getting details of componentstatuses
STEP: getting status of etcd-1
Jan 17 13:35:12.495: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-1'
Jan 17 13:35:12.848: INFO: stderr: ""
Jan 17 13:35:12.848: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-0
Jan 17 13:35:12.848: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-0'
Jan 17 13:35:13.168: INFO: stderr: ""
Jan 17 13:35:13.168: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of controller-manager
Jan 17 13:35:13.168: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get componentstatuses controller-manager'
Jan 17 13:35:13.504: INFO: stderr: ""
Jan 17 13:35:13.504: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Jan 17 13:35:13.504: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get componentstatuses scheduler'
Jan 17 13:35:13.830: INFO: stderr: ""
Jan 17 13:35:13.830: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:13.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9241" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:12.674 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:108
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":8,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
Jan 17 13:34:34.662: INFO: stdout: "NAMESPACE                         NAME                                                         TYPE                                  DATA   AGE\nclientset-9013                    default-token-29b96                                          kubernetes.io/service-account-token   3      20s\nconfigmap-912                     default-token-v7hq7                                          kubernetes.io/service-account-token   3      6s\ncontainer-probe-9310              default-token-2cnpk                                          kubernetes.io/service-account-token   3      99s\ncontainer-runtime-2482            default-token-2l5fk                                          kubernetes.io/service-account-token   3      32s\ncronjob-2900                      default-token-2v72h                                          kubernetes.io/service-account-token   3      33s\ncsi-mock-volumes-6067             csi-attacher-token-c9kfg                                     kubernetes.io/service-account-token   3      97s\ncsi-mock-volumes-6067             csi-mock-token-74bx6                                         kubernetes.io/service-account-token   3      91s\ncsi-mock-volumes-6067             csi-provisioner-token-w2npw                                  kubernetes.io/service-account-token   3      95s\ncsi-mock-volumes-6067             csi-resizer-token-4nfh4                                      kubernetes.io/service-account-token   3      93s\ncsi-mock-volumes-6067             default-token-b7x96                                          kubernetes.io/service-account-token   3      99s\ncustom-resource-definition-6558   default-token-ztqct                                          kubernetes.io/service-account-token   3      18s\ndefault                           default-token-bx5zs                                          kubernetes.io/service-account-token   3      5m9s\ndeployment-8681                   default-token-4h25c                                          kubernetes.io/service-account-token   3      28s\ndisruption-2144                   default-token-s9fzn                                          kubernetes.io/service-account-token   3      19s\ndisruption-3450                   default-token-dv6sm                                          kubernetes.io/service-account-token   3      8s\ngc-4013                           default-token-gpn79                                          kubernetes.io/service-account-token   3      9s\ngcp-volume-9546                   default-token-4nq8s                                          kubernetes.io/service-account-token   3      5s\njob-2740                          default-token-c5whw                                          kubernetes.io/service-account-token   3      2s\njob-3474                          default-token-q8w7h                                          kubernetes.io/service-account-token   3      34s\nkube-node-lease                   default-token-g5v5q                                          kubernetes.io/service-account-token   3      5m9s\nkube-public                       default-token-zhds2                                          kubernetes.io/service-account-token   3      5m9s\nkube-system                       attachdetach-controller-token-ln8t9                          kubernetes.io/service-account-token   3      5m10s\nkube-system                       certificate-controller-token-btlss                           kubernetes.io/service-account-token   3      5m22s\nkube-system                       
cloud-provider-token-ckmxb                                   kubernetes.io/service-account-token   3      5m25s
kube-system                       clusterrole-aggregation-controller-token-rpld2               kubernetes.io/service-account-token   3      5m22s
kube-system                       coredns-token-zvzn9                                          kubernetes.io/service-account-token   3      5m6s
kube-system                       cronjob-controller-token-b2dzt                               kubernetes.io/service-account-token   3      5m20s
kube-system                       daemon-set-controller-token-vxflg                            kubernetes.io/service-account-token   3      5m21s
kube-system                       default-token-s9v9h                                          kubernetes.io/service-account-token   3      5m9s
kube-system                       deployment-controller-token-2s4dw                            kubernetes.io/service-account-token   3      5m9s
kube-system                       disruption-controller-token-nq6xm                            kubernetes.io/service-account-token   3      5m21s
kube-system                       endpoint-controller-token-ptcrc                              kubernetes.io/service-account-token   3      5m9s
kube-system                       event-exporter-sa-token-wk9rw                                kubernetes.io/service-account-token   3      5m6s
kube-system                       expand-controller-token-ss2jt                                kubernetes.io/service-account-token   3      5m9s
kube-system                       fluentd-gcp-scaler-token-22ktn                               kubernetes.io/service-account-token   3      5m5s
kube-system                       fluentd-gcp-token-4mg77                                      kubernetes.io/service-account-token   3      5m6s
kube-system                       generic-garbage-collector-token-mbjsx                        kubernetes.io/service-account-token   3      5m24s
kube-system                       horizontal-pod-autoscaler-token-s4kht                        kubernetes.io/service-account-token   3      5m21s
kube-system                       job-controller-token-tmb2r                                   kubernetes.io/service-account-token   3      5m21s
kube-system                       kube-dns-autoscaler-token-bjs7t                              kubernetes.io/service-account-token   3      4m59s
kube-system                       kubernetes-dashboard-certs                                   Opaque                                0      5m7s
kube-system                       kubernetes-dashboard-key-holder                              Opaque                                2      5m7s
kube-system                       kubernetes-dashboard-token-mqbbr                             kubernetes.io/service-account-token   3      4m59s
kube-system                       metadata-proxy-token-6hblr                                   kubernetes.io/service-account-token   3      5m5s
kube-system                       metrics-server-token-s6dkx                                   kubernetes.io/service-account-token   3      5m4s
kube-system                       namespace-controller-token-tb4bk                             kubernetes.io/service-account-token   3      5m9s
kube-system                       node-controller-token-7zggz                                  kubernetes.io/service-account-token   3      5m22s
kube-system                       persistent-volume-binder-token-cvrgz                         kubernetes.io/service-account-token   3      5m22s
kube-system                       pod-garbage-collector-token-47rpb                            kubernetes.io/service-account-token   3      5m10s
kube-system                       pv-protection-controller-token-9kzkk                         kubernetes.io/service-account-token   3      5m25s
kube-system                       pvc-protection-controller-token-clmtl                        kubernetes.io/service-account-token   3      5m21s
kube-system                       replicaset-controller-token-ftwql                            kubernetes.io/service-account-token   3      5m22s
kube-system                       replication-controller-token-7dph6                           kubernetes.io/service-account-token   3      5m9s
kube-system                       resourcequota-controller-token-t6cjv                         kubernetes.io/service-account-token   3      5m22s
kube-system                       route-controller-token-5rw69                                 kubernetes.io/service-account-token   3      5m21s
kube-system                       service-account-controller-token-9l6rm                       kubernetes.io/service-account-token   3      5m9s
kube-system                       service-controller-token-8dx2v                               kubernetes.io/service-account-token   3      5m10s
kube-system                       statefulset-controller-token-w7p4r                           kubernetes.io/service-account-token   3      5m10s
kube-system                       ttl-controller-token-mh2p7                                   kubernetes.io/service-account-token   3      5m10s
kube-system                       volume-snapshot-controller-token-87mbm                       kubernetes.io/service-account-token   3      5m1s
kubectl-2366                      default-token-2p2g5                                          kubernetes.io/service-account-token   3      32s
kubectl-2531                      default-token-nc2gc                                          kubernetes.io/service-account-token   3      1s
kubectl-2531                      secret1mt9p7dghkt                                            Opaque                                1      0s
kubectl-3539                      default-token-wllkn                                          kubernetes.io/service-account-token   3      19s
kubectl-8951                      default-token-ngpns                                          kubernetes.io/service-account-token   3      1s
node-lease-test-7240              default-token-vz7n2                                          kubernetes.io/service-account-token   3      7s
port-forwarding-6964              default-token-vhmgg                                          kubernetes.io/service-account-token   3      18s
projected-24                      default-token-z979w                                          kubernetes.io/service-account-token   3      20s
projected-9107                    default-token-dwkhc                                          kubernetes.io/service-account-token   3      24s
projected-9107                    projected-secret-test-a8a679b2-0c55-40ba-a653-dd88434a1e4a   Opaque                                3      21s
provisioning-1359                 default-token-8bzk4                                          kubernetes.io/service-account-token   3      104s
provisioning-1561                 default-token-d6mdx                                          kubernetes.io/service-account-token   3      65s
provisioning-2307                 default-token-t9kvn                                          kubernetes.io/service-account-token   3      63s
provisioning-2688                 default-token-6sg6m                                          kubernetes.io/service-account-token   3      33s
provisioning-2872                 default-token-dljlb                                          kubernetes.io/service-account-token   3      10s
provisioning-4445                 default-token-j5c9g                                          kubernetes.io/service-account-token   3      1s
provisioning-4887                 default-token-6hlk7                                          kubernetes.io/service-account-token   3      40s
provisioning-6230                 default-token-nkkmc                                          kubernetes.io/service-account-token   3      52s
provisioning-8481                 default-token-x7qsz                                          kubernetes.io/service-account-token   3      52s
provisioning-8537                 default-token-vs4bd                                          kubernetes.io/service-account-token   3      35s
provisioning-9355                 default-token-dl4xz                                          kubernetes.io/service-account-token   3      33s
pv-2914                           default-token-twk9g                                          kubernetes.io/service-account-token   3      44s
secret-namespace-8980             default-token-rvgqt                                          kubernetes.io/service-account-token   3      22s
secret-namespace-8980             projected-secret-test-a8a679b2-0c55-40ba-a653-dd88434a1e4a   Opaque                                1      22s
security-context-7437             default-token-shr6m                                          kubernetes.io/service-account-token   3      1s
security-context-test-6061        default-token-tgr9j                                          kubernetes.io/service-account-token   3      0s
services-135                      default-token-7wm86                                          kubernetes.io/service-account-token   3      15s
services-5413                     default-token-2h58d                                          kubernetes.io/service-account-token   3      32s
services-5744                     default-token-sz5dp                                          kubernetes.io/service-account-token   3      63s
statefulset-1548                  default-token-sslzp                                          kubernetes.io/service-account-token   3      97s
volume-2853                       default-token-72rwq                                          kubernetes.io/service-account-token   3      38s
volume-3746                       default-token-4ks85                                          kubernetes.io/service-account-token   3      98s
volume-6261                       default-token-d4pml                                          kubernetes.io/service-account-token   3      92s
volume-7000                       default-token-5hvk7                                          kubernetes.io/service-account-token   3      100s
volume-8901                       default-token-8r4vb                                          kubernetes.io/service-account-token   3      47s
volume-expand-7397                csi-attacher-token-mlxm5                                     kubernetes.io/service-account-token   3      7s
volume-expand-7397                csi-provisioner-token-2kz9w                                  kubernetes.io/service-account-token   3      5s
volume-expand-7397                csi-resizer-token-jcwnr                                      kubernetes.io/service-account-token   3      1s
volume-expand-7397                csi-snapshotter-token-drh6r                                  kubernetes.io/service-account-token   3      2s
volume-expand-7397                default-token-b5hmx                                          kubernetes.io/service-account-token   3      9s
volume-expand-8398                default-token-628qn                                          kubernetes.io/service-account-token   3      104s
volumemode-2959                   default-token-4nr98                                          kubernetes.io/service-account-token   3      73s
volumemode-5109                   default-token-t48wd                                          kubernetes.io/service-account-token   3      25s
volumemode-7125                   default-token-56mf4                                          kubernetes.io/service-account-token   3      2s
volumemode-8148                   default-token-q2lg6                                          kubernetes.io/service-account-token   3      95s
webhook-1939-markers              default-token-cmvm7                                          kubernetes.io/service-account-token   3      33s
webhook-1939                      default-token-n7tn9                                          kubernetes.io/service-account-token   3      34s
webhook-3926-markers              default-token-bvxxm                                          kubernetes.io/service-account-token   3      28s
webhook-3926                      default-token-kqqbn                                          kubernetes.io/service-account-token   3      31s
webhook-526-markers               default-token-pdmg4                                          kubernetes.io/service-account-token   3      35s
webhook-526                       default-token-gfz4z                                          kubernetes.io/service-account-token   3      39s
Jan 17 13:34:35.177: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get persistentvolumeclaims --all-namespaces'
Jan 17 13:34:35.515: INFO: stderr: ""
Jan 17 13:34:35.515: INFO: stdout:
NAMESPACE           NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
kubectl-2531        pvc1mt9p7dghkt   Pending                                                                        standard                        0s
provisioning-6230   nfsnwtgh         Bound     pvc-0f2b209f-0879-46de-8188-068aaf8bdd4d   5Gi        RWO            provisioning-6230-nfs-scztvjd   26s
provisioning-9355   pvc-gxg74        Bound     local-2jd2n                                2Gi        RWO            provisioning-9355               20s
pv-2914             pvc-dh5m8        Bound     nfs-vv9zc                                  2Gi        RWO                                            3s
volume-2853         pvc-x2vpz        Bound     gcepd-gx77m                                2Gi        RWO            volume-2853                     25s
volume-6261         pvc-5clft        Bound     gluster-6sq9n                              2Gi        RWO            volume-6261                     63s
Jan 17 13:34:35.884: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get pods --all-namespaces'
Jan 17 13:34:36.245: INFO: stderr: ""
Jan 17 13:34:36.245: INFO: stdout:
NAMESPACE                    NAME                                                    READY   STATUS              RESTARTS   AGE
configmap-912                pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51     0/1     ContainerCreating   0          6s
container-probe-9310         busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0            1/1     Running             0          100s
csi-mock-volumes-6067        csi-mockplugin-0                                        3/3     Running             0          90s
csi-mock-volumes-6067        csi-mockplugin-attacher-0                               1/1     Running             0          90s
deployment-8681              test-rolling-update-deployment-67cf4f6444-7trmk         1/1     Running             0          17s
disruption-2144              pod-0                                                   1/1     Terminating         0          20s
gcp-volume-9546              gluster-server                                          0/1     ContainerCreating   0          4s
job-2740                     fail-once-non-local-fbpdc                               0/1     ContainerCreating   0          3s
job-2740                     fail-once-non-local-vqp68                               0/1     ContainerCreating   0          3s
job-3474                     fail-once-local-lk8nv                                   0/1     Completed           1          24s
job-3474                     fail-once-local-qxkn4                                   0/1     Completed           1          35s
job-3474                     fail-once-local-rq2cm                                   0/1     Completed           1          26s
job-3474                     fail-once-local-v7mzs                                   0/1     Completed           1          35s
kube-system                  coredns-65567c7b57-sbrn5                                1/1     Running             0          4m33s
kube-system                  coredns-65567c7b57-vgx2l                                1/1     Running             0          5m3s
kube-system                  etcd-empty-dir-cleanup-bootstrap-e2e-master             1/1     Running             0          4m14s
kube-system                  etcd-server-bootstrap-e2e-master                        1/1     Running             0          4m27s
kube-system                  etcd-server-events-bootstrap-e2e-master                 1/1     Running             0          4m31s
kube-system                  event-exporter-v0.3.1-747b47fcd-757kq                   2/2     Running             0          5m8s
kube-system                  fluentd-gcp-scaler-76d9c77b4d-v7lw9                     1/1     Running             0          5m1s
kube-system                  fluentd-gcp-v3.2.0-2j564                                2/2     Running             0          3m59s
kube-system                  fluentd-gcp-v3.2.0-cgd45                                2/2     Running             0          3m46s
kube-system                  fluentd-gcp-v3.2.0-kr7d8                                2/2     Running             0          4m4s
kube-system                  fluentd-gcp-v3.2.0-tqmf5                                2/2     Running             0          3m36s
kube-system                  fluentd-gcp-v3.2.0-zkr7k                                2/2     Running             0          4m25s
kube-system                  kube-addon-manager-bootstrap-e2e-master                 1/1     Running             0          4m8s
kube-system                  kube-apiserver-bootstrap-e2e-master                     1/1     Running             0          4m41s
kube-system                  kube-controller-manager-bootstrap-e2e-master            1/1     Running             0          4m46s
kube-system                  kube-dns-autoscaler-65bc6d4889-ml5rx                    1/1     Running             0          4m57s
kube-system                  kube-proxy-bootstrap-e2e-minion-group-cksd              1/1     Running             0          4m53s
kube-system                  kube-proxy-bootstrap-e2e-minion-group-hs9p              1/1     Running             0          4m56s
kube-system                  kube-proxy-bootstrap-e2e-minion-group-l1kf              1/1     Running             0          4m53s
kube-system                  kube-proxy-bootstrap-e2e-minion-group-mp1q              1/1     Running             0          4m52s
kube-system                  kube-scheduler-bootstrap-e2e-master                     1/1     Running             0          4m25s
kube-system                  kubernetes-dashboard-7778f8b456-tkqpc                   1/1     Running             0          5m1s
kube-system                  l7-default-backend-678889f899-7nh6w                     1/1     Running             0          5m3s
kube-system                  l7-lb-controller-bootstrap-e2e-master                   1/1     Running             2          3m53s
kube-system                  metadata-proxy-v0.1-2hrsk                               2/2     Running             0          4m53s
kube-system                  metadata-proxy-v0.1-4hnjt                               2/2     Running             0          4m56s
kube-system                  metadata-proxy-v0.1-8ll7f                               2/2     Running             0          4m54s
kube-system                  metadata-proxy-v0.1-dkm8f                               2/2     Running             0          4m54s
kube-system                  metadata-proxy-v0.1-ltzzx                               2/2     Running             0          4m56s
kube-system                  metrics-server-v0.3.6-5f859c87d6-b9nsp                  2/2     Running             0          4m27s
kube-system                  volume-snapshot-controller-0                            1/1     Running             0          4m53s
kubectl-2531                 pod1mt9p7dghkt                                          0/1     Pending             0          1s
kubectl-8951                 pause                                                   0/1     ContainerCreating   0          1s
port-forwarding-6964         pfpod                                                   0/2     Completed           0          19s
projected-24                 labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf        1/1     Running             0          21s
provisioning-2688            external-provisioner-cn765                              0/1     ContainerCreating   0          34s
provisioning-2872            pod-subpath-test-inlinevolume-hp2b                      0/2     Init:0/2            0          4s
provisioning-6230            external-provisioner-gmfzp                              1/1     Running             0          52s
provisioning-6230            pod-subpath-test-dynamicpv-n6w9                         0/1     ContainerCreating   0          7s
provisioning-8537            external-provisioner-5jdkq                              0/1     ContainerCreating   0          36s
provisioning-9355            hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2          1/1     Running             0          27s
provisioning-9355            pod-subpath-test-preprovisionedpv-zlzp                  0/2     Init:0/2            0          4s
pv-2914                      nfs-server                                              1/1     Running             0          45s
pv-2914                      pvc-tester-csqd9                                        0/1     ContainerCreating   0          1s
security-context-7437        security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3   0/1     ContainerCreating   0          3s
security-context-test-6061   alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f     0/1     ContainerCreating   0          1s
services-135                 hostexec                                                1/1     Running             0          13s
services-5413                execpod24s8x                                            1/1     Running             0          24s
services-5413                externalname-service-5f6kw                              1/1     Running             0          33s
services-5413                externalname-service-zpns9                              1/1     Running             0          33s
services-5744                execpod9mmhl                                            1/1     Running             0          50s
statefulset-1548             ss2-0                                                   1/1     Running             0          46s
statefulset-1548             ss2-1                                                   1/1     Running             0          40s
statefulset-1548             ss2-2                                                   1/1     Running             0          30s
volume-2853                  exec-volume-test-preprovisionedpv-rg5z                  0/1     ContainerCreating   0          8s
volume-3746                  gcepd-client                                            1/1     Terminating         0          37s
volume-6261                  gluster-client                                          1/1     Terminating         0          20s
volume-6261                  gluster-server                                          1/1     Running             0          87s
volume-expand-7397           csi-hostpath-attacher-0                                 0/1     Pending             0          1s
volume-expand-7397           csi-hostpath-provisioner-0                              0/1     Pending             0          1s
volume-expand-7397           csi-hostpath-resizer-0                                  0/1     ContainerCreating   0          1s
volume-expand-7397           csi-hostpathplugin-0                                    0/3     ContainerCreating   0          2s
volume-expand-7397           csi-snapshotter-0                                       0/1     Pending             0          1s
volumemode-5109              external-provisioner-m5hmh                              0/1     ContainerCreating   0          26s
volumemode-8148              gluster-server                                          1/1     Terminating         0          89s
webhook-3926                 webhook-to-be-mutated                                   0/1     Init:ErrImagePull   0          15s
Jan 17 13:34:36.688: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 17 13:34:39.022: INFO: stderr: ""
Jan 17 13:34:39.023: INFO: stdout:
NAMESPACE                LAST SEEN   TYPE      REASON                    OBJECT   MESSAGE
clientset-9013           21s     Normal    Scheduled                 pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70   Successfully assigned clientset-9013/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70 to bootstrap-e2e-minion-group-hs9p
clientset-9013           18s     Normal    Pulled                    pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70   Container image "docker.io/library/nginx:1.14-alpine" already present on machine
clientset-9013           18s     Normal    Created                   pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70   Created container nginx
clientset-9013           16s     Normal    Started                   pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70   Started container nginx
configmap-912            6s      Normal    Scheduled                 pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51   Successfully assigned configmap-912/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51 to bootstrap-e2e-minion-group-cksd
configmap-912            3s      Normal    Pulled                    pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-912            3s      Normal    Created                   pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51   Created container configmap-volume-test
configmap-912            2s      Normal    Started                   pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51   Started container configmap-volume-test
container-probe-9310     101s    Normal    Scheduled                 pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0   Successfully assigned container-probe-9310/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0 to bootstrap-e2e-minion-group-hs9p
container-probe-9310     100s    Normal    Pulling                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0   Pulling image "docker.io/library/busybox:1.29"
container-probe-9310     99s     Normal    Pulled                    pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0   Successfully pulled image "docker.io/library/busybox:1.29"
container-probe-9310     99s     Normal    Created                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0   Created container busybox
container-probe-9310     99s     Normal    Started                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0   Started container busybox
container-runtime-2482   33s     Normal    Scheduled                 pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Successfully assigned container-runtime-2482/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad to bootstrap-e2e-minion-group-hs9p
container-runtime-2482   30s     Normal    Pulling                   pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
container-runtime-2482   30s     Normal    Pulled                    pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
container-runtime-2482   30s     Normal    Created                   pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Created container image-pull-test
container-runtime-2482   29s     Normal    Started                   pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Started container image-pull-test
container-runtime-2482   22s     Normal    Killing                   pod/image-pull-test564218f8-b311-4f73-9b99-0ee65799c5ad   Stopping container image-pull-test
csi-mock-volumes-6067    87s     Normal    Pulling                   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-6067    81s     Normal    Pulled                    pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-6067    80s     Normal    Created                   pod/csi-mockplugin-0   Created container csi-provisioner
csi-mock-volumes-6067    79s     Normal    Started                   pod/csi-mockplugin-0   Started container csi-provisioner
csi-mock-volumes-6067    79s     Normal    Pulled                    pod/csi-mockplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
csi-mock-volumes-6067    79s     Normal    Created                   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-6067    78s     Normal    Started                   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-6067    78s     Normal    Pulling                   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-6067    75s     Normal    Pulled                    pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-6067    75s     Normal    Created                   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-6067    75s     Normal    Started                   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-6067    87s     Normal    Pulling                   pod/csi-mockplugin-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-6067    81s     Normal    Pulled                    pod/csi-mockplugin-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-6067    81s     Normal    Created                   pod/csi-mockplugin-attacher-0   Created container csi-attacher
csi-mock-volumes-6067    79s     Normal    Started                   pod/csi-mockplugin-attacher-0   Started container csi-attacher
csi-mock-volumes-6067    91s     Normal    SuccessfulCreate          statefulset/csi-mockplugin-attacher   create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful
csi-mock-volumes-6067    91s     Normal    SuccessfulCreate          statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-6067    86s     Normal    ExternalProvisioning      persistentvolumeclaim/pvc-hrxkc   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-6067" or manually created by system administrator
csi-mock-volumes-6067    74s     Normal    Provisioning              persistentvolumeclaim/pvc-hrxkc   External provisioner is provisioning volume for claim "csi-mock-volumes-6067/pvc-hrxkc"
csi-mock-volumes-6067    74s     Normal    ProvisioningSucceeded     persistentvolumeclaim/pvc-hrxkc   Successfully provisioned volume pvc-702b0b1f-3f68-4a88-8ee5-d4d784438dbe
csi-mock-volumes-6067    70s     Normal    SuccessfulAttachVolume    pod/pvc-volume-tester-kpm8w   AttachVolume.Attach succeeded for volume "pvc-702b0b1f-3f68-4a88-8ee5-d4d784438dbe"
csi-mock-volumes-6067    52s     Normal    Pulled                    pod/pvc-volume-tester-kpm8w   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-6067    52s     Normal    Created                   pod/pvc-volume-tester-kpm8w   Created container volume-tester
csi-mock-volumes-6067    51s     Normal    Started                   pod/pvc-volume-tester-kpm8w   Started container volume-tester
csi-mock-volumes-6067    49s     Normal    Killing                   pod/pvc-volume-tester-kpm8w   Stopping container volume-tester
default                  4m57s   Normal    RegisteredNode            node/bootstrap-e2e-master   Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller
default                  4m57s   Normal    Starting                  node/bootstrap-e2e-minion-group-cksd   Starting kubelet.
default                  4m56s   Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-cksd   Node bootstrap-e2e-minion-group-cksd status is now: NodeHasSufficientMemory
default                  4m56s   Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-cksd   Node bootstrap-e2e-minion-group-cksd status is now: NodeHasNoDiskPressure
default                  4m56s   Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-cksd   Node bootstrap-e2e-minion-group-cksd status is now: NodeHasSufficientPID
default                  4m57s   Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-cksd   Updated Node Allocatable limit across pods
default                  4m55s   Warning   ContainerdStart           node/bootstrap-e2e-minion-group-cksd   Starting containerd container runtime...
default                  4m55s   Warning   DockerStart               node/bootstrap-e2e-minion-group-cksd   Starting Docker Application Container Engine...
default                  4m55s   Warning   KubeletStart              node/bootstrap-e2e-minion-group-cksd   Started Kubernetes kubelet.
default                  4m55s   Normal    NodeReady                 node/bootstrap-e2e-minion-group-cksd   Node bootstrap-e2e-minion-group-cksd status is now: NodeReady
default                  4m54s   Normal    Starting                  node/bootstrap-e2e-minion-group-cksd   Starting kube-proxy.
default                  4m52s   Normal    RegisteredNode            node/bootstrap-e2e-minion-group-cksd   Node bootstrap-e2e-minion-group-cksd event: Registered Node bootstrap-e2e-minion-group-cksd in Controller
default                  4m57s   Normal    Starting                  node/bootstrap-e2e-minion-group-hs9p   Starting kubelet.
default                  4m57s   Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-hs9p   Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasSufficientMemory
default                  4m57s   Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-hs9p   Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasNoDiskPressure
default                  4m57s   Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-hs9p   Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasSufficientPID
default                  4m57s   Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-hs9p   Updated Node Allocatable limit across pods
default                  4m57s   Normal    RegisteredNode            node/bootstrap-e2e-minion-group-hs9p   Node bootstrap-e2e-minion-group-hs9p event: Registered Node bootstrap-e2e-minion-group-hs9p in Controller
default                  4m54s   Warning   ContainerdStart           node/bootstrap-e2e-minion-group-hs9p   Starting containerd container runtime...
default                  4m54s   Warning   DockerStart               node/bootstrap-e2e-minion-group-hs9p   Starting Docker Application Container Engine...
default                  4m54s   Normal    Starting                  node/bootstrap-e2e-minion-group-hs9p   Starting kube-proxy.
default                  4m54s   Warning   KubeletStart              node/bootstrap-e2e-minion-group-hs9p   Started Kubernetes kubelet.
default                  4m47s   Normal    NodeReady                 node/bootstrap-e2e-minion-group-hs9p   Node bootstrap-e2e-minion-group-hs9p status is now: NodeReady
default                  4m56s   Normal    Starting                  node/bootstrap-e2e-minion-group-l1kf   Starting kubelet.
default                  4m56s   Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-l1kf   Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasSufficientMemory
default                  4m56s   Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-l1kf   Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasNoDiskPressure
default                  4m56s   Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-l1kf   Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasSufficientPID
default                  4m56s   Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-l1kf   Updated Node Allocatable limit across pods
default                  4m55s   Normal    NodeReady                 node/bootstrap-e2e-minion-group-l1kf   Node bootstrap-e2e-minion-group-l1kf status is now: NodeReady
default                  4m54s   Normal    Starting                  node/bootstrap-e2e-minion-group-l1kf   Starting kube-proxy.
default                  4m53s   Warning   ContainerdStart           node/bootstrap-e2e-minion-group-l1kf   Starting containerd container runtime...
default                  4m53s   Warning   DockerStart               node/bootstrap-e2e-minion-group-l1kf   Starting Docker Application Container Engine...
default                  4m53s   Warning   KubeletStart              node/bootstrap-e2e-minion-group-l1kf   Started Kubernetes kubelet.
default                  4m52s   Normal    RegisteredNode            node/bootstrap-e2e-minion-group-l1kf   Node bootstrap-e2e-minion-group-l1kf event: Registered Node bootstrap-e2e-minion-group-l1kf in Controller
default                  4m55s   Normal    Starting                  node/bootstrap-e2e-minion-group-mp1q   Starting kubelet.
default                  4m55s   Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-mp1q   Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasSufficientMemory
default                  4m55s   Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-mp1q   Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasNoDiskPressure
default                  4m55s   Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-mp1q   Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasSufficientPID
default                  4m55s   Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-mp1q   Updated Node Allocatable limit across pods
default                  4m54s   Normal    NodeReady                 node/bootstrap-e2e-minion-group-mp1q   Node bootstrap-e2e-minion-group-mp1q status is now: NodeReady
default                  4m53s   Warning   ContainerdStart           node/bootstrap-e2e-minion-group-mp1q   Starting containerd container runtime...
default                  4m53s   Warning   DockerStart               node/bootstrap-e2e-minion-group-mp1q   Starting Docker Application Container Engine...
default                  4m53s   Warning   KubeletStart              node/bootstrap-e2e-minion-group-mp1q   Started Kubernetes kubelet.
default                  4m53s   Normal    Starting                  node/bootstrap-e2e-minion-group-mp1q   Starting kube-proxy.
default                  4m52s   Normal    RegisteredNode            node/bootstrap-e2e-minion-group-mp1q   Node bootstrap-e2e-minion-group-mp1q event: Registered Node bootstrap-e2e-minion-group-mp1q in Controller
default                  22s     Normal    VolumeDelete              persistentvolume/pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d   googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource
default                  54s     Normal    VolumeDelete              persistentvolume/pvc-cc1af00c-efdc-48d1-a7e3-768ed69fd2d7   googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-cc1af00c-efdc-48d1-a7e3-768ed69fd2d7' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
default                  48s     Normal    VolumeDelete              persistentvolume/pvc-d33f4f03-546d-4c84-b6e5-b1f7c6e4e55d   googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-d33f4f03-546d-4c84-b6e5-b1f7c6e4e55d' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource
deployment-8681          29s     Normal    Scheduled                 pod/test-rolling-update-controller-w7fxq   Successfully assigned deployment-8681/test-rolling-update-controller-w7fxq to bootstrap-e2e-minion-group-hs9p
deployment-8681          28s     Warning   FailedMount               pod/test-rolling-update-controller-w7fxq   MountVolume.SetUp failed for volume "default-token-4h25c" : failed to sync secret cache: timed out waiting for the condition
deployment-8681          24s     Normal    Pulled                    pod/test-rolling-update-controller-w7fxq   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-8681          23s     Normal    Created                   pod/test-rolling-update-controller-w7fxq   Created container httpd
deployment-8681          22s     Normal    Started                   pod/test-rolling-update-controller-w7fxq   Started container httpd
deployment-8681          11s     Normal    Killing                   pod/test-rolling-update-controller-w7fxq   Stopping container httpd
deployment-8681          29s     Normal    SuccessfulCreate          replicaset/test-rolling-update-controller   Created pod: test-rolling-update-controller-w7fxq
deployment-8681          11s     Normal    SuccessfulDelete          replicaset/test-rolling-update-controller   Deleted pod: test-rolling-update-controller-w7fxq
deployment-8681          17s     Normal    Scheduled                 pod/test-rolling-update-deployment-67cf4f6444-7trmk   Successfully assigned deployment-8681/test-rolling-update-deployment-67cf4f6444-7trmk to bootstrap-e2e-minion-group-cksd
deployment-8681          17s     Normal    Pulled                    pod/test-rolling-update-deployment-67cf4f6444-7trmk   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
deployment-8681          16s     Normal    Created                   pod/test-rolling-update-deployment-67cf4f6444-7trmk   Created container agnhost
deployment-8681          16s     Normal    Started                   pod/test-rolling-update-deployment-67cf4f6444-7trmk   Started container agnhost
deployment-8681          18s     Normal    SuccessfulCreate          replicaset/test-rolling-update-deployment-67cf4f6444   Created pod: test-rolling-update-deployment-67cf4f6444-7trmk
deployment-8681          18s     Normal    ScalingReplicaSet         deployment/test-rolling-update-deployment   Scaled up replica set test-rolling-update-deployment-67cf4f6444 to 1
deployment-8681          11s     Normal    ScalingReplicaSet         deployment/test-rolling-update-deployment   Scaled down replica set test-rolling-update-controller to 0
disruption-2144          21s     Normal    Scheduled                 pod/pod-0   Successfully assigned disruption-2144/pod-0 to bootstrap-e2e-minion-group-l1kf
disruption-2144          18s     Normal    Pulling                   pod/pod-0   Pulling image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"
disruption-2144          9s      Normal    Pulled                    pod/pod-0   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"
disruption-2144          9s      Normal    Created                   pod/pod-0   Created container busybox
disruption-2144          8s      Normal    Started                   pod/pod-0   Started container busybox
disruption-2144          4s      Normal    Killing                   pod/pod-0   Stopping container busybox
disruption-3450          9s      Normal    NoPods                    poddisruptionbudget/foo   No matching pods found
gc-4013                  10s     Normal    Scheduled                 pod/simpletest.deployment-7ccb84659c-8r2zq   Successfully assigned gc-4013/simpletest.deployment-7ccb84659c-8r2zq to bootstrap-e2e-minion-group-cksd
gc-4013                  8s      Normal    Pulled                    pod/simpletest.deployment-7ccb84659c-8r2zq   Container image "docker.io/library/nginx:1.14-alpine" already present on machine
gc-4013                  8s      Normal    Created                   pod/simpletest.deployment-7ccb84659c-8r2zq   Created container nginx
gc-4013                  7s      Normal    Started                   pod/simpletest.deployment-7ccb84659c-8r2zq   Started container nginx
gc-4013                  10s     Normal    Scheduled                 pod/simpletest.deployment-7ccb84659c-nk26k   Successfully assigned gc-4013/simpletest.deployment-7ccb84659c-nk26k to bootstrap-e2e-minion-group-cksd
gc-4013                  8s      Normal    Pulled                    pod/simpletest.deployment-7ccb84659c-nk26k   Container image "docker.io/library/nginx:1.14-alpine" already present on machine
gc-4013                  8s      Normal    Created                   pod/simpletest.deployment-7ccb84659c-nk26k   Created container nginx
gc-4013                  7s      Normal    Started                   pod/simpletest.deployment-7ccb84659c-nk26k   Started container nginx
gc-4013                  10s     Normal    SuccessfulCreate          replicaset/simpletest.deployment-7ccb84659c   Created pod: simpletest.deployment-7ccb84659c-nk26k
gc-4013                  10s     Normal    SuccessfulCreate          replicaset/simpletest.deployment-7ccb84659c   Created pod: simpletest.deployment-7ccb84659c-8r2zq
gc-4013                  11s     Normal    ScalingReplicaSet         deployment/simpletest.deployment   Scaled up replica set simpletest.deployment-7ccb84659c to 2
gcp-volume-9546          5s      Normal    Scheduled                 pod/gluster-server   Successfully assigned gcp-volume-9546/gluster-server to bootstrap-e2e-minion-group-hs9p
gcp-volume-9546          3s      Normal    Pulled                    pod/gluster-server   Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
gcp-volume-9546          3s      Normal    Created                   pod/gluster-server   Created container gluster-server
gcp-volume-9546          2s      Normal    Started                   pod/gluster-server   Started container gluster-server
job-2740                 1s      Normal    Pulled                    pod/fail-once-non-local-vqp68   Container image "docker.io/library/busybox:1.29" already present on machine
job-2740                 1s      Normal    Created                   pod/fail-once-non-local-vqp68   Created container c
job-2740                 4s      Normal    SuccessfulCreate          job/fail-once-non-local   Created pod: fail-once-non-local-fbpdc
job-2740                 4s      Normal    SuccessfulCreate          job/fail-once-non-local   Created pod: fail-once-non-local-vqp68
job-3474                 25s     Normal    Scheduled                 pod/fail-once-local-lk8nv   Successfully assigned job-3474/fail-once-local-lk8nv to bootstrap-e2e-minion-group-l1kf
job-3474                 17s     Normal    Pulled                    pod/fail-once-local-lk8nv   Container image "docker.io/library/busybox:1.29" already present on machine
job-3474                 17s     Normal    Created                   pod/fail-once-local-lk8nv   Created container c
job-3474                 16s     Normal    Started                   pod/fail-once-local-lk8nv   Started container c
job-3474                 13s     Normal    SandboxChanged            pod/fail-once-local-lk8nv   Pod sandbox changed, it will be killed and re-created.
job-3474                 12s     Warning   FailedCreatePodSandBox    pod/fail-once-local-lk8nv   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "fail-once-local-lk8nv": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-17T13:34:25Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
job-3474                 36s     Normal    Scheduled                 pod/fail-once-local-qxkn4   Successfully assigned job-3474/fail-once-local-qxkn4 to bootstrap-e2e-minion-group-l1kf
job-3474                 28s     Normal    Pulled                    pod/fail-once-local-qxkn4   Container image "docker.io/library/busybox:1.29" already present on machine
job-3474                 28s     Normal    Created                   pod/fail-once-local-qxkn4   Created container c
job-3474                 27s     Normal    Started                   pod/fail-once-local-qxkn4   Started container c
job-3474                 26s     Normal    Scheduled                 pod/fail-once-local-rq2cm   Successfully assigned job-3474/fail-once-local-rq2cm to bootstrap-e2e-minion-group-l1kf
job-3474                 21s     Normal    Pulled                    pod/fail-once-local-rq2cm   Container image "docker.io/library/busybox:1.29" already present on machine
job-3474                 21s     Normal    Created                   pod/fail-once-local-rq2cm   Created container c
job-3474                 19s     Normal    Started                   pod/fail-once-local-rq2cm   Started container c
job-3474                 36s     Normal    Scheduled                 pod/fail-once-local-v7mzs   Successfully assigned job-3474/fail-once-local-v7mzs to bootstrap-e2e-minion-group-l1kf
job-3474                 29s     Normal    Pulled                    pod/fail-once-local-v7mzs   Container image "docker.io/library/busybox:1.29" already present on machine
job-3474                 29s     Normal    Created                   pod/fail-once-local-v7mzs   Created container c
job-3474                 28s     Normal    Started                   pod/fail-once-local-v7mzs   Started container c
job-3474                 27s     Normal    SandboxChanged            pod/fail-once-local-v7mzs   Pod sandbox changed, it will be killed and re-created.
job-3474                 25s     Warning   FailedCreatePodSandBox    pod/fail-once-local-v7mzs   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "fail-once-local-v7mzs": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-17T13:34:12Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
job-3474                 36s     Normal    SuccessfulCreate          job/fail-once-local   Created pod: fail-once-local-v7mzs
job-3474                 36s     Normal    SuccessfulCreate          job/fail-once-local   Created pod: fail-once-local-qxkn4
job-3474                 27s     Normal    SuccessfulCreate          job/fail-once-local   Created pod: fail-once-local-rq2cm
job-3474                 25s     Normal    SuccessfulCreate          job/fail-once-local   Created pod: fail-once-local-lk8nv
job-3474                 14s     Normal    Completed                 job/fail-once-local   Job completed
kube-system              4m33s   Normal    Scheduled                 pod/coredns-65567c7b57-sbrn5   Successfully assigned kube-system/coredns-65567c7b57-sbrn5 to bootstrap-e2e-minion-group-cksd
kube-system              4m32s   Normal    Pulling                   pod/coredns-65567c7b57-sbrn5   Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system              4m30s   Normal    Pulled                    pod/coredns-65567c7b57-sbrn5   Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system              4m30s   Normal    Created                   pod/coredns-65567c7b57-sbrn5   Created container coredns
kube-system              4m30s   Normal    Started                   pod/coredns-65567c7b57-sbrn5   Started container coredns
kube-system              5m4s    Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l   no nodes available to schedule pods
kube-system              4m58s   Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l   0/1 nodes are available: 1 node(s) were unschedulable.
kube-system              4m49s   Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system              4m40s   Normal    Scheduled                 pod/coredns-65567c7b57-vgx2l   Successfully assigned kube-system/coredns-65567c7b57-vgx2l to bootstrap-e2e-minion-group-l1kf
kube-system              4m39s   Normal    Pulling                   pod/coredns-65567c7b57-vgx2l   Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system              4m37s   Normal    Pulled                    pod/coredns-65567c7b57-vgx2l   Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system              4m37s   Normal    Created                   pod/coredns-65567c7b57-vgx2l   Created container coredns
kube-system              4m37s   Normal    Started                   pod/coredns-65567c7b57-vgx2l   Started container coredns
kube-system              5m9s    Warning   FailedCreate              replicaset/coredns-65567c7b57   Error creating: pods "coredns-65567c7b57-" is forbidden: no providers available to validate pod request
kube-system              5m6s    Warning   FailedCreate              replicaset/coredns-65567c7b57   Error creating: pods "coredns-65567c7b57-" is forbidden: unable to validate against any pod security policy: []
kube-system              5m4s    Normal    SuccessfulCreate          replicaset/coredns-65567c7b57   Created pod: coredns-65567c7b57-vgx2l
kube-system              4m34s   Normal    SuccessfulCreate          replicaset/coredns-65567c7b57   Created pod: coredns-65567c7b57-sbrn5
kube-system              5m9s    Normal    ScalingReplicaSet         deployment/coredns   Scaled up replica set coredns-65567c7b57 to 1
kube-system              4m34s   Normal    ScalingReplicaSet         deployment/coredns   Scaled up replica set coredns-65567c7b57 to 2
kube-system              5m6s    Warning   FailedScheduling          pod/event-exporter-v0.3.1-747b47fcd-757kq   no nodes available to schedule pods
kube-system              4m46s   Warning   FailedScheduling          pod/event-exporter-v0.3.1-747b47fcd-757kq   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system              4m43s   Normal    Scheduled                 pod/event-exporter-v0.3.1-747b47fcd-757kq   Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-757kq to bootstrap-e2e-minion-group-hs9p
kube-system              4m41s   Normal    Pulling                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Pulling image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system              4m39s   Normal    Pulled                    pod/event-exporter-v0.3.1-747b47fcd-757kq   Successfully pulled image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system              4m39s   Normal    Created                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Created container event-exporter
kube-system              4m38s   Normal    Started                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Started container event-exporter
kube-system              4m38s   Normal    Pulling                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system              4m37s   Normal    Pulled                    pod/event-exporter-v0.3.1-747b47fcd-757kq   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system              4m36s   Normal    Created                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Created container prometheus-to-sd-exporter
kube-system              4m36s   Normal    Started                   pod/event-exporter-v0.3.1-747b47fcd-757kq   Started container prometheus-to-sd-exporter
kube-system              5m9s    Normal    SuccessfulCreate          replicaset/event-exporter-v0.3.1-747b47fcd   Created pod: event-exporter-v0.3.1-747b47fcd-757kq
kube-system              5m9s    Normal    ScalingReplicaSet         deployment/event-exporter-v0.3.1   Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1
kube-system              5m2s    Warning   FailedScheduling          pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   no nodes available to schedule pods
kube-system              4m46s   Warning   FailedScheduling          pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system              4m43s   Normal    Scheduled                 pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-v7lw9 to bootstrap-e2e-minion-group-mp1q
kube-system              4m41s   Normal    Pulling                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   Pulling image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system              4m35s   Normal    Pulled                    pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   Successfully pulled image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system              4m34s   Normal    Created                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   Created container fluentd-gcp-scaler
kube-system              4m34s   Normal    Started                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9   Started container fluentd-gcp-scaler
kube-system              5m2s    Normal    SuccessfulCreate          replicaset/fluentd-gcp-scaler-76d9c77b4d   Created pod: fluentd-gcp-scaler-76d9c77b4d-v7lw9
kube-system              5m2s    Normal    ScalingReplicaSet         deployment/fluentd-gcp-scaler   Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1
kube-system              4m      Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-2j564   Successfully assigned kube-system/fluentd-gcp-v3.2.0-2j564 to bootstrap-e2e-minion-group-l1kf
kube-system              3m59s   Normal    Pulled                    pod/fluentd-gcp-v3.2.0-2j564   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system              3m59s   Normal    Created                   pod/fluentd-gcp-v3.2.0-2j564   Created container fluentd-gcp
kube-system              3m59s   Normal    Started                   pod/fluentd-gcp-v3.2.0-2j564   Started container fluentd-gcp
kube-system              3m59s   Normal    Pulled                    pod/fluentd-gcp-v3.2.0-2j564   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system              3m59s   Normal    Created                   pod/fluentd-gcp-v3.2.0-2j564   Created container prometheus-to-sd-exporter
kube-system              3m58s   Normal    Started                   pod/fluentd-gcp-v3.2.0-2j564   Started container prometheus-to-sd-exporter
kube-system              4m55s   Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-6q62x   Successfully assigned kube-system/fluentd-gcp-v3.2.0-6q62x to bootstrap-e2e-minion-group-cksd
kube-system              4m54s   Warning   FailedMount               pod/fluentd-gcp-v3.2.0-6q62x   MountVolume.SetUp failed for volume "fluentd-gcp-token-4mg77" : failed to sync secret cache: timed out waiting for the condition
kube-system              4m54s   Warning   FailedMount               pod/fluentd-gcp-v3.2.0-6q62x   MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system              4m53s   Normal    Pulling                   pod/fluentd-gcp-v3.2.0-6q62x   Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system              4m44s   Normal    Pulled                    pod/fluentd-gcp-v3.2.0-6q62x   Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system              4m44s   Normal    Created                   pod/fluentd-gcp-v3.2.0-6q62x   Created container fluentd-gcp
kube-system              4m43s   Normal    Started                   pod/fluentd-gcp-v3.2.0-6q62x   Started container fluentd-gcp
kube-system              4m43s   Normal    Pulled                    pod/fluentd-gcp-v3.2.0-6q62x   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system              4m43s   Normal    Created                   pod/fluentd-gcp-v3.2.0-6q62x   Created container prometheus-to-sd-exporter
kube-system              4m43s   Normal    Started                   pod/fluentd-gcp-v3.2.0-6q62x   Started container prometheus-to-sd-exporter
kube-system              3m46s   Normal    Killing                   pod/fluentd-gcp-v3.2.0-6q62x   Stopping container fluentd-gcp
kube-system              3m46s   Normal    Killing                   pod/fluentd-gcp-v3.2.0-6q62x   Stopping container prometheus-to-sd-exporter
kube-system              3m47s   Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-cgd45   Successfully assigned kube-system/fluentd-gcp-v3.2.0-cgd45 to bootstrap-e2e-minion-group-hs9p
kube-system              3m47s   Normal    Pulled                    pod/fluentd-gcp-v3.2.0-cgd45   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on 
machine\nkube-system                  3m47s       Normal    Created                   pod/fluentd-gcp-v3.2.0-cgd45                                     Created container fluentd-gcp\nkube-system                  3m46s       Normal    Started                   pod/fluentd-gcp-v3.2.0-cgd45                                     Started container fluentd-gcp\nkube-system                  3m46s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-cgd45                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  3m46s       Normal    Created                   pod/fluentd-gcp-v3.2.0-cgd45                                     Created container prometheus-to-sd-exporter\nkube-system                  3m46s       Normal    Started                   pod/fluentd-gcp-v3.2.0-cgd45                                     Started container prometheus-to-sd-exporter\nkube-system                  4m5s        Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-kr7d8                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-kr7d8 to bootstrap-e2e-minion-group-mp1q\nkube-system                  4m5s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-kr7d8                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                  4m4s        Normal    Created                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Created container fluentd-gcp\nkube-system                  4m4s        Normal    Started                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Started container fluentd-gcp\nkube-system                  4m4s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-kr7d8                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m4s        Normal    Created                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Created container prometheus-to-sd-exporter\nkube-system                  4m4s        Normal    Started                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Started container prometheus-to-sd-exporter\nkube-system                  4m56s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-pxfq4                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-pxfq4 to bootstrap-e2e-minion-group-hs9p\nkube-system                  4m55s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m44s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-pxfq4                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m44s       Normal    Created                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Created container fluentd-gcp\nkube-system                  4m44s       Normal    Started                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Started container fluentd-gcp\nkube-system                  4m44s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-pxfq4                              
       Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m44s       Normal    Created                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Created container prometheus-to-sd-exporter\nkube-system                  4m44s       Normal    Started                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Started container prometheus-to-sd-exporter\nkube-system                  3m58s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Stopping container fluentd-gcp\nkube-system                  3m58s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Stopping container prometheus-to-sd-exporter\nkube-system                  3m37s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-tqmf5                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-tqmf5 to bootstrap-e2e-minion-group-cksd\nkube-system                  3m36s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-tqmf5                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                  3m36s       Normal    Created                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Created container fluentd-gcp\nkube-system                  3m36s       Normal    Started                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Started container fluentd-gcp\nkube-system                  3m36s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-tqmf5                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  3m36s       Normal    Created                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Created container prometheus-to-sd-exporter\nkube-system                  3m35s       Normal    Started                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Started container prometheus-to-sd-exporter\nkube-system                  4m54s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-wdzg7                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-wdzg7 to bootstrap-e2e-minion-group-l1kf\nkube-system                  4m53s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m44s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wdzg7                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m44s       Normal    Created                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Created container fluentd-gcp\nkube-system                  4m43s       Normal    Started                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Started container fluentd-gcp\nkube-system                  4m43s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wdzg7                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m43s    
   Normal    Created                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Created container prometheus-to-sd-exporter\nkube-system                  4m43s       Normal    Started                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Started container prometheus-to-sd-exporter\nkube-system                  4m4s        Normal    Killing                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Stopping container fluentd-gcp\nkube-system                  4m4s        Normal    Killing                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Stopping container prometheus-to-sd-exporter\nkube-system                  4m57s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-wxzbs                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-wxzbs to bootstrap-e2e-master\nkube-system                  4m50s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-wxzbs                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m30s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m28s       Normal    Created                   pod/fluentd-gcp-v3.2.0-wxzbs                                     Created container fluentd-gcp\nkube-system                  4m28s       Warning   Failed                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Error: failed to start container \"fluentd-gcp\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/kubelet/pods/d6ca37cc-405d-4a79-a6b9-ed5a5527bb94/volumes/kubernetes.io~configmap/config-volume\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/aef5e712b104a3b370ec2a08cec85628d088a1105518e800124113c69ab128b0/merged\\\\\\\" at \\\\\\\"/etc/google-fluentd/config.d\\\\\\\" caused \\\\\\\"stat /var/lib/kubelet/pods/d6ca37cc-405d-4a79-a6b9-ed5a5527bb94/volumes/kubernetes.io~configmap/config-volume: no such file or directory\\\\\\\"\\\"\": unknown\nkube-system                  4m28s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m28s       Warning   Failed                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Error: cannot find volume \"fluentd-gcp-token-4mg77\" to mount into container \"prometheus-to-sd-exporter\"\nkube-system                  2m25s       Warning   FailedMount               pod/fluentd-gcp-v3.2.0-wxzbs                                     Unable to attach or mount volumes: unmounted volumes=[varlog varlibdockercontainers config-volume fluentd-gcp-token-4mg77], unattached volumes=[varlog varlibdockercontainers config-volume fluentd-gcp-token-4mg77]: timed out waiting for the condition\nkube-system                  4m53s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-z6wfx                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-z6wfx to bootstrap-e2e-minion-group-mp1q\nkube-system                  4m52s       Normal 
   Pulling                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m42s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-z6wfx                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                  4m42s       Normal    Created                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Created container fluentd-gcp\nkube-system                  4m41s       Normal    Started                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Started container fluentd-gcp\nkube-system                  4m41s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-z6wfx                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m41s       Normal    Created                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Created container prometheus-to-sd-exporter\nkube-system                  4m41s       Normal    Started                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Started container prometheus-to-sd-exporter\nkube-system                  4m17s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Stopping container fluentd-gcp\nkube-system                  4m17s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Stopping container prometheus-to-sd-exporter\nkube-system                  4m26s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-zkr7k                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-zkr7k to bootstrap-e2e-master\nkube-system                  4m25s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zkr7k                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                  4m25s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Created container fluentd-gcp\nkube-system                  4m24s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Started container fluentd-gcp\nkube-system                  4m24s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zkr7k                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                  4m24s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Created container prometheus-to-sd-exporter\nkube-system                  4m18s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Started container prometheus-to-sd-exporter\nkube-system                  4m57s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-wxzbs\nkube-system                  4m57s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-pxfq4\nkube-system                  4m56s       Normal    
SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-6q62x\nkube-system                  4m55s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-wdzg7\nkube-system                  4m54s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-z6wfx\nkube-system                  4m28s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-wxzbs\nkube-system                  4m26s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-zkr7k\nkube-system                  4m17s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-z6wfx\nkube-system                  4m5s        Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-kr7d8\nkube-system                  4m4s        Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-wdzg7\nkube-system                  4m          Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-2j564\nkube-system                  3m58s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-pxfq4\nkube-system                  3m47s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-cgd45\nkube-system                  3m46s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-6q62x\nkube-system                  3m37s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     (combined from similar events): Created pod: fluentd-gcp-v3.2.0-tqmf5\nkube-system                  4m47s       Normal    LeaderElection            configmap/ingress-gce-lock                                       bootstrap-e2e-master_707d7 became leader\nkube-system                  5m28s       Normal    LeaderElection            endpoints/kube-controller-manager                                bootstrap-e2e-master_1f41b409-083b-4f59-9fa4-872a8b500782 became leader\nkube-system                  5m28s       Normal    LeaderElection            lease/kube-controller-manager                                    bootstrap-e2e-master_1f41b409-083b-4f59-9fa4-872a8b500782 became leader\nkube-system                  4m58s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         no nodes available to schedule pods\nkube-system                  4m56s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m48s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         0/5 nodes are available: 1 node(s) 
were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m39s       Normal    Scheduled                 pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-ml5rx to bootstrap-e2e-minion-group-cksd\nkube-system                  4m38s       Normal    Pulling                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Pulling image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                  4m36s       Normal    Pulled                    pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Successfully pulled image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                  4m35s       Normal    Created                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Created container autoscaler\nkube-system                  4m35s       Normal    Started                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Started container autoscaler\nkube-system                  5m3s        Warning   FailedCreate              replicaset/kube-dns-autoscaler-65bc6d4889                        Error creating: pods \"kube-dns-autoscaler-65bc6d4889-\" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount \"kube-dns-autoscaler\" not found\nkube-system                  4m58s       Normal    SuccessfulCreate          replicaset/kube-dns-autoscaler-65bc6d4889                        Created pod: kube-dns-autoscaler-65bc6d4889-ml5rx\nkube-system                  5m9s        Normal    ScalingReplicaSet         deployment/kube-dns-autoscaler                                   Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1\nkube-system                  4m55s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                  4m55s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Created container kube-proxy\nkube-system                  4m55s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Started container kube-proxy\nkube-system                  4m55s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                  4m55s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Created container kube-proxy\nkube-system                  4m55s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Started container kube-proxy\nkube-system                  4m54s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                  4m54s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Created container kube-proxy\nkube-system                  4m54s       Normal    Started                   
pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Started container kube-proxy\nkube-system                  4m54s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                  4m54s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Created container kube-proxy\nkube-system                  4m53s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Started container kube-proxy\nkube-system                  5m31s       Normal    LeaderElection            endpoints/kube-scheduler                                         bootstrap-e2e-master_02d65249-3a22-48c1-916c-fed1fcef458e became leader\nkube-system                  5m31s       Normal    LeaderElection            lease/kube-scheduler                                             bootstrap-e2e-master_02d65249-3a22-48c1-916c-fed1fcef458e became leader\nkube-system                  5m2s        Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        no nodes available to schedule pods\nkube-system                  4m57s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.\nkube-system                  4m48s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m39s       Normal    Scheduled                 pod/kubernetes-dashboard-7778f8b456-tkqpc                        Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-tkqpc to bootstrap-e2e-minion-group-mp1q\nkube-system                  4m36s       Normal    Pulling                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Pulling image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                  4m31s       Normal    Pulled                    pod/kubernetes-dashboard-7778f8b456-tkqpc                        Successfully pulled image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                  4m30s       Normal    Created                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Created container kubernetes-dashboard\nkube-system                  4m29s       Normal    Started                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Started container kubernetes-dashboard\nkube-system                  5m2s        Normal    SuccessfulCreate          replicaset/kubernetes-dashboard-7778f8b456                       Created pod: kubernetes-dashboard-7778f8b456-tkqpc\nkube-system                  5m2s        Normal    ScalingReplicaSet         deployment/kubernetes-dashboard                                  Scaled up replica set kubernetes-dashboard-7778f8b456 to 1\nkube-system                  5m4s        Warning   FailedScheduling          pod/l7-default-backend-678889f899-7nh6w                          no nodes available to schedule pods\nkube-system                  4m46s       Warning   FailedScheduling          pod/l7-default-backend-678889f899-7nh6w                         
 0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m43s       Normal    Scheduled                 pod/l7-default-backend-678889f899-7nh6w                          Successfully assigned kube-system/l7-default-backend-678889f899-7nh6w to bootstrap-e2e-minion-group-l1kf\nkube-system                  4m35s       Normal    Pulling                   pod/l7-default-backend-678889f899-7nh6w                          Pulling image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                  4m34s       Normal    Pulled                    pod/l7-default-backend-678889f899-7nh6w                          Successfully pulled image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                  4m34s       Normal    Created                   pod/l7-default-backend-678889f899-7nh6w                          Created container default-http-backend\nkube-system                  4m26s       Normal    Started                   pod/l7-default-backend-678889f899-7nh6w                          Started container default-http-backend\nkube-system                  5m9s        Warning   FailedCreate              replicaset/l7-default-backend-678889f899                         Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: no providers available to validate pod request\nkube-system                  5m6s        Warning   FailedCreate              replicaset/l7-default-backend-678889f899                         Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                  5m4s        Normal    SuccessfulCreate          replicaset/l7-default-backend-678889f899                         Created pod: l7-default-backend-678889f899-7nh6w\nkube-system                  5m9s        Normal    ScalingReplicaSet         deployment/l7-default-backend                                    Scaled up replica set l7-default-backend-678889f899 to 1\nkube-system                  5m1s        Normal    Created                   pod/l7-lb-controller-bootstrap-e2e-master                        Created container l7-lb-controller\nkube-system                  4m59s       Normal    Started                   pod/l7-lb-controller-bootstrap-e2e-master                        Started container l7-lb-controller\nkube-system                  5m1s        Normal    Pulled                    pod/l7-lb-controller-bootstrap-e2e-master                        Container image \"k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1\" already present on machine\nkube-system                  4m54s       Normal    Scheduled                 pod/metadata-proxy-v0.1-2hrsk                                    Successfully assigned kube-system/metadata-proxy-v0.1-2hrsk to bootstrap-e2e-minion-group-mp1q\nkube-system                  4m53s       Warning   FailedMount               pod/metadata-proxy-v0.1-2hrsk                                    MountVolume.SetUp failed for volume \"metadata-proxy-token-6hblr\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                  4m51s       Normal    Pulling                   pod/metadata-proxy-v0.1-2hrsk                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m49s       Normal    Pulled                    pod/metadata-proxy-v0.1-2hrsk                                    Successfully pulled 
image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m48s       Normal    Created                   pod/metadata-proxy-v0.1-2hrsk                                    Created container metadata-proxy\nkube-system                  4m47s       Normal    Started                   pod/metadata-proxy-v0.1-2hrsk                                    Started container metadata-proxy\nkube-system                  4m47s       Normal    Pulling                   pod/metadata-proxy-v0.1-2hrsk                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m46s       Normal    Pulled                    pod/metadata-proxy-v0.1-2hrsk                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m44s       Normal    Created                   pod/metadata-proxy-v0.1-2hrsk                                    Created container prometheus-to-sd-exporter\nkube-system                  4m43s       Normal    Started                   pod/metadata-proxy-v0.1-2hrsk                                    Started container prometheus-to-sd-exporter\nkube-system                  4m57s       Normal    Scheduled                 pod/metadata-proxy-v0.1-4hnjt                                    Successfully assigned kube-system/metadata-proxy-v0.1-4hnjt to bootstrap-e2e-master\nkube-system                  4m54s       Normal    Pulling                   pod/metadata-proxy-v0.1-4hnjt                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m54s       Normal    Pulled                    pod/metadata-proxy-v0.1-4hnjt                                    Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m53s       Normal    Created                   pod/metadata-proxy-v0.1-4hnjt                                    Created container metadata-proxy\nkube-system                  4m53s       Normal    Started                   pod/metadata-proxy-v0.1-4hnjt                                    Started container metadata-proxy\nkube-system                  4m53s       Normal    Pulling                   pod/metadata-proxy-v0.1-4hnjt                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m51s       Normal    Pulled                    pod/metadata-proxy-v0.1-4hnjt                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m50s       Normal    Created                   pod/metadata-proxy-v0.1-4hnjt                                    Created container prometheus-to-sd-exporter\nkube-system                  4m48s       Normal    Started                   pod/metadata-proxy-v0.1-4hnjt                                    Started container prometheus-to-sd-exporter\nkube-system                  4m54s       Normal    Scheduled                 pod/metadata-proxy-v0.1-8ll7f                                    Successfully assigned kube-system/metadata-proxy-v0.1-8ll7f to bootstrap-e2e-minion-group-cksd\nkube-system                  4m53s       Normal    Pulling                   pod/metadata-proxy-v0.1-8ll7f                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m51s       Normal    Pulled                    pod/metadata-proxy-v0.1-8ll7f                                    Successfully pulled image 
\"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m50s       Normal    Created                   pod/metadata-proxy-v0.1-8ll7f                                    Created container metadata-proxy\nkube-system                  4m49s       Normal    Started                   pod/metadata-proxy-v0.1-8ll7f                                    Started container metadata-proxy\nkube-system                  4m49s       Normal    Pulling                   pod/metadata-proxy-v0.1-8ll7f                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m47s       Normal    Pulled                    pod/metadata-proxy-v0.1-8ll7f                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m46s       Normal    Created                   pod/metadata-proxy-v0.1-8ll7f                                    Created container prometheus-to-sd-exporter\nkube-system                  4m44s       Normal    Started                   pod/metadata-proxy-v0.1-8ll7f                                    Started container prometheus-to-sd-exporter\nkube-system                  4m54s       Normal    Scheduled                 pod/metadata-proxy-v0.1-dkm8f                                    Successfully assigned kube-system/metadata-proxy-v0.1-dkm8f to bootstrap-e2e-minion-group-l1kf\nkube-system                  4m53s       Normal    Pulling                   pod/metadata-proxy-v0.1-dkm8f                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m51s       Normal    Pulled                    pod/metadata-proxy-v0.1-dkm8f                                    Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m50s       Normal    Created                   pod/metadata-proxy-v0.1-dkm8f                                    Created container metadata-proxy\nkube-system                  4m48s       Normal    Started                   pod/metadata-proxy-v0.1-dkm8f                                    Started container metadata-proxy\nkube-system                  4m48s       Normal    Pulling                   pod/metadata-proxy-v0.1-dkm8f                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m47s       Normal    Pulled                    pod/metadata-proxy-v0.1-dkm8f                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m44s       Normal    Created                   pod/metadata-proxy-v0.1-dkm8f                                    Created container prometheus-to-sd-exporter\nkube-system                  4m43s       Normal    Started                   pod/metadata-proxy-v0.1-dkm8f                                    Started container prometheus-to-sd-exporter\nkube-system                  4m56s       Normal    Scheduled                 pod/metadata-proxy-v0.1-ltzzx                                    Successfully assigned kube-system/metadata-proxy-v0.1-ltzzx to bootstrap-e2e-minion-group-hs9p\nkube-system                  4m55s       Warning   FailedMount               pod/metadata-proxy-v0.1-ltzzx                                    MountVolume.SetUp failed for volume \"metadata-proxy-token-6hblr\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                  4m53s       Normal    Pulling                   
pod/metadata-proxy-v0.1-ltzzx                                    Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m51s       Normal    Pulled                    pod/metadata-proxy-v0.1-ltzzx                                    Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                  4m50s       Normal    Created                   pod/metadata-proxy-v0.1-ltzzx                                    Created container metadata-proxy\nkube-system                  4m49s       Normal    Started                   pod/metadata-proxy-v0.1-ltzzx                                    Started container metadata-proxy\nkube-system                  4m49s       Normal    Pulling                   pod/metadata-proxy-v0.1-ltzzx                                    Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m48s       Normal    Pulled                    pod/metadata-proxy-v0.1-ltzzx                                    Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                  4m46s       Normal    Created                   pod/metadata-proxy-v0.1-ltzzx                                    Created container prometheus-to-sd-exporter\nkube-system                  4m44s       Normal    Started                   pod/metadata-proxy-v0.1-ltzzx                                    Started container prometheus-to-sd-exporter\nkube-system                  4m57s       Normal    SuccessfulCreate          daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-4hnjt\nkube-system                  4m57s       Normal    SuccessfulCreate          daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-ltzzx\nkube-system                  4m55s       Normal    SuccessfulCreate          daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-8ll7f\nkube-system                  4m55s       Normal    SuccessfulCreate          daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-dkm8f\nkube-system                  4m54s       Normal    SuccessfulCreate          daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-2hrsk\nkube-system                  4m28s       Normal    Scheduled                 pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-b9nsp to bootstrap-e2e-minion-group-mp1q\nkube-system                  4m27s       Normal    Pulling                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                  4m26s       Normal    Pulled                    pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                  4m26s       Normal    Created                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Created container metrics-server\nkube-system                  4m24s       Normal    Started                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Started container metrics-server\nkube-system                  4m24s       Normal    Pulling                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Pulling image 
\"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                  4m23s       Normal    Pulled                    pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                  4m23s       Normal    Created                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Created container metrics-server-nanny\nkube-system                  4m22s       Normal    Started                   pod/metrics-server-v0.3.6-5f859c87d6-b9nsp                       Started container metrics-server-nanny\nkube-system                  4m28s       Normal    SuccessfulCreate          replicaset/metrics-server-v0.3.6-5f859c87d6                      Created pod: metrics-server-v0.3.6-5f859c87d6-b9nsp\nkube-system                  5m4s        Warning   FailedScheduling          pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        no nodes available to schedule pods\nkube-system                  4m57s       Warning   FailedScheduling          pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.\nkube-system                  4m46s       Warning   FailedScheduling          pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m37s       Normal    Scheduled                 pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-2hsf7 to bootstrap-e2e-minion-group-l1kf\nkube-system                  4m36s       Normal    Pulling                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                  4m33s       Normal    Pulled                    pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                  4m33s       Normal    Created                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Created container metrics-server\nkube-system                  4m32s       Normal    Started                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Started container metrics-server\nkube-system                  4m32s       Normal    Pulling                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                  4m29s       Normal    Pulled                    pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                  4m29s       Normal    Created                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Created container metrics-server-nanny\nkube-system                  4m29s       Normal    Started                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Started container metrics-server-nanny\nkube-system                  4m20s       Normal    Killing                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7                        Stopping container metrics-server\nkube-system                  4m20s       Normal    Killing                   pod/metrics-server-v0.3.6-65d4dc878-2hsf7          
              Stopping container metrics-server-nanny\nkube-system                  5m5s        Warning   FailedCreate              replicaset/metrics-server-v0.3.6-65d4dc878                       Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                  5m4s        Normal    SuccessfulCreate          replicaset/metrics-server-v0.3.6-65d4dc878                       Created pod: metrics-server-v0.3.6-65d4dc878-2hsf7\nkube-system                  4m20s       Normal    SuccessfulDelete          replicaset/metrics-server-v0.3.6-65d4dc878                       Deleted pod: metrics-server-v0.3.6-65d4dc878-2hsf7\nkube-system                  5m6s        Normal    ScalingReplicaSet         deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                  4m28s       Normal    ScalingReplicaSet         deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                  4m20s       Normal    ScalingReplicaSet         deployment/metrics-server-v0.3.6                                 Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                  4m46s       Warning   FailedScheduling          pod/volume-snapshot-controller-0                                 0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                  4m43s       Normal    Scheduled                 pod/volume-snapshot-controller-0                                 Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-cksd\nkube-system                  4m42s       Normal    Pulling                   pod/volume-snapshot-controller-0                                 Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                  4m38s       Normal    Pulled                    pod/volume-snapshot-controller-0                                 Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                  4m38s       Normal    Created                   pod/volume-snapshot-controller-0                                 Created container volume-snapshot-controller\nkube-system                  4m37s       Normal    Started                   pod/volume-snapshot-controller-0                                 Started container volume-snapshot-controller\nkube-system                  4m54s       Normal    SuccessfulCreate          statefulset/volume-snapshot-controller                           create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-2531                 <unknown>                                                                                                        some data here\nkubectl-2531                 1s          Warning   FailedScheduling          pod/pod1mt9p7dghkt                                               0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.\nkubectl-2531                 1s          Warning   FailedScheduling          pod/pod1mt9p7dghkt                                               skip schedule deleting pod: kubectl-2531/pod1mt9p7dghkt\nkubectl-2531                 2s          Warning   ProvisioningFailed        persistentvolumeclaim/pvc1mt9p7dghkt                 
Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
kubectl-8951  2s  Normal  Scheduled  pod/pause  Successfully assigned kubectl-8951/pause to bootstrap-e2e-minion-group-cksd
port-forwarding-6964  20s  Normal  Scheduled  pod/pfpod  Successfully assigned port-forwarding-6964/pfpod to bootstrap-e2e-minion-group-mp1q
port-forwarding-6964  19s  Warning  FailedMount  pod/pfpod  MountVolume.SetUp failed for volume "default-token-vhmgg" : failed to sync secret cache: timed out waiting for the condition
port-forwarding-6964  17s  Normal  Pulled  pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-6964  17s  Normal  Created  pod/pfpod  Created container readiness
port-forwarding-6964  17s  Normal  Started  pod/pfpod  Started container readiness
port-forwarding-6964  17s  Normal  Pulled  pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-6964  17s  Normal  Created  pod/pfpod  Created container portforwardtester
port-forwarding-6964  17s  Normal  Started  pod/pfpod  Started container portforwardtester
port-forwarding-6964  1s  Warning  Unhealthy  pod/pfpod  Readiness probe failed:
projected-24  22s  Normal  Scheduled  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Successfully assigned projected-24/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf to bootstrap-e2e-minion-group-hs9p
projected-24  18s  Normal  Pulled  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-24  17s  Normal  Created  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Created container client-container
projected-24  16s  Normal  Started  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Started container client-container
projected-9107  24s  Normal  Scheduled  pod/pod-projected-secrets-e4e60c08-01e4-45ff-ae5b-f3b9a8bb5221  Successfully assigned projected-9107/pod-projected-secrets-e4e60c08-01e4-45ff-ae5b-f3b9a8bb5221 to bootstrap-e2e-minion-group-cksd
projected-9107  22s  Normal  Pulled  pod/pod-projected-secrets-e4e60c08-01e4-45ff-ae5b-f3b9a8bb5221  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-9107  22s  Normal  Created  pod/pod-projected-secrets-e4e60c08-01e4-45ff-ae5b-f3b9a8bb5221  Created container projected-secret-volume-test
projected-9107  21s  Normal  Started  pod/pod-projected-secrets-e4e60c08-01e4-45ff-ae5b-f3b9a8bb5221  Started container projected-secret-volume-test
provisioning-1359  93s  Normal  Pulling  pod/csi-hostpath-attacher-0  Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
provisioning-1359  80s  Normal  Pulled  pod/csi-hostpath-attacher-0  Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
provisioning-1359  80s  Normal  Created  pod/csi-hostpath-attacher-0  Created container csi-attacher
provisioning-1359  78s  Normal  Started  pod/csi-hostpath-attacher-0  Started container csi-attacher
provisioning-1359  34s  Normal  Killing  pod/csi-hostpath-attacher-0  Stopping container csi-attacher
provisioning-1359  99s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1359  97s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-1359  93s  Normal  Pulling  pod/csi-hostpath-provisioner-0  Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
provisioning-1359  80s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
provisioning-1359  80s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
provisioning-1359  78s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
provisioning-1359  31s  Normal  Killing  pod/csi-hostpath-provisioner-0  Stopping container csi-provisioner
provisioning-1359  98s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1359  97s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-1359  94s  Normal  Pulling  pod/csi-hostpath-resizer-0  Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-1359  82s  Normal  Pulled  pod/csi-hostpath-resizer-0  Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
provisioning-1359  29s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
provisioning-1359  29s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
provisioning-1359  29s  Normal  Pulled  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
provisioning-1359  26s  Warning  FailedMount  pod/csi-hostpath-resizer-0  MountVolume.SetUp failed for volume "csi-resizer-token-clq5t" : secret "csi-resizer-token-clq5t" not found
provisioning-1359  98s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1359  97s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
provisioning-1359  87s  Normal  ExternalProvisioning  persistentvolumeclaim/csi-hostpathfjk29  waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-1359" or manually created by system administrator
provisioning-1359  77s  Normal  Provisioning  persistentvolumeclaim/csi-hostpathfjk29  External provisioner is provisioning volume for claim "provisioning-1359/csi-hostpathfjk29"
provisioning-1359  77s  Normal  ProvisioningSucceeded  persistentvolumeclaim/csi-hostpathfjk29  Successfully provisioned volume pvc-01f67fed-cc92-4d22-9139-7ed6130e4f80
provisioning-1359  98s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
provisioning-1359  97s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
provisioning-1359  96s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
provisioning-1359  96s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
provisioning-1359  96s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-1359  87s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
provisioning-1359  87s  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
provisioning-1359  85s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
provisioning-1359  85s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-1359  80s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
provisioning-1359  79s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
provisioning-1359  78s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
provisioning-1359  32s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container node-driver-registrar
provisioning-1359  33s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container liveness-probe
provisioning-1359  33s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container hostpath
provisioning-1359  31s  Warning  Unhealthy  pod/csi-hostpathplugin-0  Liveness probe failed: Get http://10.64.2.5:9898/healthz: dial tcp 10.64.2.5:9898: connect: connection refused
provisioning-1359  31s  Warning  FailedPreStopHook  pod/csi-hostpathplugin-0  Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_provisioning-1359(56423ae3-7ce0-44f1-b835-01e3c9d9b664)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
provisioning-1359  101s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-1359  96s  Normal  Pulling  pod/csi-snapshotter-0  Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-1359  88s  Normal  Pulled  pod/csi-snapshotter-0  Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
provisioning-1359  30s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
provisioning-1359  30s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
provisioning-1359  31s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
provisioning-1359  23s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-gw4jt" : secret "csi-snapshotter-token-gw4jt" not found
provisioning-1359  98s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
provisioning-1359  98s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
provisioning-1359  74s  Normal  SuccessfulAttachVolume  pod/pod-subpath-test-dynamicpv-dwnl  AttachVolume.Attach succeeded for volume "pvc-01f67fed-cc92-4d22-9139-7ed6130e4f80"
provisioning-1359  65s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-dwnl  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-1359  65s  Normal  Created  pod/pod-subpath-test-dynamicpv-dwnl  Created container init-volume-dynamicpv-dwnl
provisioning-1359  64s  Normal  Started  pod/pod-subpath-test-dynamicpv-dwnl  Started container init-volume-dynamicpv-dwnl
provisioning-1359  60s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-dwnl  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1359  60s  Normal  Created  pod/pod-subpath-test-dynamicpv-dwnl  Created container test-container-subpath-dynamicpv-dwnl
provisioning-1359  59s  Normal  Started  pod/pod-subpath-test-dynamicpv-dwnl  Started container test-container-subpath-dynamicpv-dwnl
provisioning-1561  61s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-1561  61s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Created container agnhost
provisioning-1561  60s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Started container agnhost
provisioning-1561  10s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Stopping container agnhost
provisioning-1561  39s  Warning  FailedMount  pod/pod-subpath-test-preprovisionedpv-bzfc  Unable to attach or mount volumes: unmounted volumes=[test-volume liveness-probe-volume default-token-d6mdx], unattached volumes=[test-volume liveness-probe-volume default-token-d6mdx]: error processing PVC provisioning-1561/pvc-5h5q7: failed to fetch PVC from API server: persistentvolumeclaims "pvc-5h5q7" is forbidden: User "system:node:bootstrap-e2e-minion-group-cksd" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "provisioning-1561": no relationship found between node "bootstrap-e2e-minion-group-cksd" and this object
provisioning-1561  25s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-1561  24s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container init-volume-preprovisionedpv-bzfc
provisioning-1561  24s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container init-volume-preprovisionedpv-bzfc
provisioning-1561  23s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1561  23s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container test-init-volume-preprovisionedpv-bzfc
provisioning-1561  22s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container test-init-volume-preprovisionedpv-bzfc
provisioning-1561  22s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1561  21s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container test-container-subpath-preprovisionedpv-bzfc
provisioning-1561  21s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container test-container-subpath-preprovisionedpv-bzfc
provisioning-1561  49s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-5h5q7  storageclass.storage.k8s.io "provisioning-1561" not found
provisioning-2307  60s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-l1kf-lnx6f  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-2307  60s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-l1kf-lnx6f  Created container agnhost
provisioning-2307  59s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-l1kf-lnx6f  Started container agnhost
provisioning-2307  24s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-l1kf-lnx6f  Stopping container agnhost
provisioning-2307  36s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zv6l  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2307  36s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zv6l  Created container init-volume-preprovisionedpv-zv6l
provisioning-2307  35s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zv6l  Started container init-volume-preprovisionedpv-zv6l
provisioning-2307  34s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zv6l  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2307  33s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zv6l  Created container test-init-volume-preprovisionedpv-zv6l
provisioning-2307  32s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zv6l  Started container test-init-volume-preprovisionedpv-zv6l
provisioning-2307  31s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zv6l  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2307  30s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zv6l  Created container test-container-subpath-preprovisionedpv-zv6l
provisioning-2307  30s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zv6l  Started container test-container-subpath-preprovisionedpv-zv6l
provisioning-2307  54s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-m6kd5  storageclass.storage.k8s.io "provisioning-2307" not found
provisioning-2688  35s  Normal  Scheduled  pod/external-provisioner-cn765  Successfully assigned provisioning-2688/external-provisioner-cn765 to bootstrap-e2e-minion-group-hs9p
provisioning-2688  32s  Normal  Pulling  pod/external-provisioner-cn765  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-2872  4s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-hp2b  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2872  3s  Normal  Created  pod/pod-subpath-test-inlinevolume-hp2b  Created container init-volume-inlinevolume-hp2b
provisioning-2872  2s  Normal  Started  pod/pod-subpath-test-inlinevolume-hp2b  Started container init-volume-inlinevolume-hp2b
provisioning-4887  37s  Warning  FailedMount  pod/hostpath-symlink-prep-provisioning-4887  MountVolume.SetUp failed for volume "default-token-6hlk7" : failed to sync secret cache: timed out waiting for the condition
provisioning-4887  34s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-4887  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  34s  Normal  Created  pod/hostpath-symlink-prep-provisioning-4887  Created container init-volume-provisioning-4887
provisioning-4887  32s  Normal  Started  pod/hostpath-symlink-prep-provisioning-4887  Started container init-volume-provisioning-4887
provisioning-4887  8s  Warning  FailedMount  pod/hostpath-symlink-prep-provisioning-4887  MountVolume.SetUp failed for volume "default-token-6hlk7" : failed to sync secret cache: timed out waiting for the condition
provisioning-4887  6s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-4887  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  5s  Normal  Created  pod/hostpath-symlink-prep-provisioning-4887  Created container init-volume-provisioning-4887
provisioning-4887  5s  Normal  Started  pod/hostpath-symlink-prep-provisioning-4887  Started container init-volume-provisioning-4887
provisioning-4887  25s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  25s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container init-volume-inlinevolume-drt4
provisioning-4887  24s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container init-volume-inlinevolume-drt4
provisioning-4887  23s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4887  23s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container test-init-volume-inlinevolume-drt4
provisioning-4887  20s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container test-init-volume-inlinevolume-drt4
provisioning-4887  17s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4887  17s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container test-container-subpath-inlinevolume-drt4
provisioning-4887  16s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container test-container-subpath-inlinevolume-drt4
provisioning-6230  29s  Normal  LeaderElection  endpoints/example.com-nfs-provisioning-6230  external-provisioner-gmfzp_9fc2b5be-a821-46bc-9039-2d9a03a0e5d8 became leader
provisioning-6230  53s  Normal  Scheduled  pod/external-provisioner-gmfzp  Successfully assigned provisioning-6230/external-provisioner-gmfzp to bootstrap-e2e-minion-group-mp1q
provisioning-6230  52s  Warning  FailedMount  pod/external-provisioner-gmfzp  MountVolume.SetUp failed for volume "default-token-nkkmc" : failed to sync secret cache: timed out waiting for the condition
provisioning-6230  51s  Normal  Pulling  pod/external-provisioner-gmfzp  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-6230  36s  Normal  Pulled  pod/external-provisioner-gmfzp  Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-6230  35s  Normal  Created  pod/external-provisioner-gmfzp  Created container nfs-provisioner
provisioning-6230  35s  Normal  Started  pod/external-provisioner-gmfzp  Started container nfs-provisioner
provisioning-6230  29s  Normal  Provisioning  persistentvolumeclaim/nfsnwtgh  External provisioner is provisioning volume for claim "provisioning-6230/nfsnwtgh"
provisioning-6230  29s  Normal  ExternalProvisioning  persistentvolumeclaim/nfsnwtgh  waiting for a volume to be created, either by external provisioner "example.com/nfs-provisioning-6230" or manually created by system administrator
provisioning-6230  28s  Normal  ProvisioningSucceeded  persistentvolumeclaim/nfsnwtgh  Successfully provisioned volume pvc-0f2b209f-0879-46de-8188-068aaf8bdd4d
provisioning-6230  26s  Normal  Scheduled  pod/pod-subpath-test-dynamicpv-n6w9  Successfully assigned provisioning-6230/pod-subpath-test-dynamicpv-n6w9 to bootstrap-e2e-minion-group-hs9p
provisioning-6230  20s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6230  20s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container init-volume-dynamicpv-n6w9
provisioning-6230  19s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container init-volume-dynamicpv-n6w9
provisioning-6230  19s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  18s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-init-subpath-dynamicpv-n6w9
provisioning-6230  17s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-init-subpath-dynamicpv-n6w9
provisioning-6230  16s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  15s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-container-subpath-dynamicpv-n6w9
provisioning-6230  14s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-container-subpath-dynamicpv-n6w9
provisioning-6230  9s  Normal  Scheduled  pod/pod-subpath-test-dynamicpv-n6w9  Successfully assigned provisioning-6230/pod-subpath-test-dynamicpv-n6w9 to bootstrap-e2e-minion-group-cksd
provisioning-6230  3s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  3s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-container-subpath-dynamicpv-n6w9
provisioning-6230  2s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-container-subpath-dynamicpv-n6w9
provisioning-8481  49s  Normal  Scheduled  pod/pod-subpath-test-inlinevolume-nzw8  Successfully assigned provisioning-8481/pod-subpath-test-inlinevolume-nzw8 to bootstrap-e2e-minion-group-hs9p
provisioning-8481  47s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-nzw8  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-8481  47s  Normal  Created  pod/pod-subpath-test-inlinevolume-nzw8  Created container init-volume-inlinevolume-nzw8
provisioning-8481  46s  Normal  Started  pod/pod-subpath-test-inlinevolume-nzw8  Started container init-volume-inlinevolume-nzw8
provisioning-8481  45s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-nzw8  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-8481  44s  Normal  Created  pod/pod-subpath-test-inlinevolume-nzw8  Created container test-container-subpath-inlinevolume-nzw8
provisioning-8481  44s  Normal  Started  pod/pod-subpath-test-inlinevolume-nzw8  Started container test-container-subpath-inlinevolume-nzw8
provisioning-8537  38s  Normal  Scheduled  pod/external-provisioner-5jdkq  Successfully assigned provisioning-8537/external-provisioner-5jdkq to bootstrap-e2e-minion-group-l1kf
provisioning-8537  35s  Normal  Pulling  pod/external-provisioner-5jdkq  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-9355  28s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-9355  28s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Created container agnhost
provisioning-9355  27s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Started container agnhost
provisioning-9355  4s  Normal  Pulling  pod/pod-subpath-test-preprovisionedpv-zlzp  Pulling image "docker.io/library/busybox:1.29"
provisioning-9355  3s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zlzp  Successfully pulled image "docker.io/library/busybox:1.29"
provisioning-9355  3s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zlzp  Created container init-volume-preprovisionedpv-zlzp
provisioning-9355  2s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zlzp  Started container init-volume-preprovisionedpv-zlzp
provisioning-9355  23s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-gxg74  storageclass.storage.k8s.io "provisioning-9355" not found
pv-2914  47s  Normal  Scheduled  pod/nfs-server  Successfully assigned pv-2914/nfs-server to bootstrap-e2e-minion-group-hs9p
pv-2914  45s  Normal  Pulling  pod/nfs-server  Pulling image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
pv-2914  13s  Normal  Pulled  pod/nfs-server  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
pv-2914  13s  Normal  Created  pod/nfs-server  Created container nfs-server
pv-2914  13s  Normal  Started  pod/nfs-server  Started container nfs-server
pv-2914  3s  Normal  Scheduled  pod/pvc-tester-csqd9  Successfully assigned pv-2914/pvc-tester-csqd9 to bootstrap-e2e-minion-group-hs9p
security-context-7437  4s  Normal  Scheduled  pod/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3  Successfully assigned security-context-7437/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3 to bootstrap-e2e-minion-group-l1kf
security-context-test-6061  3s  Normal  Scheduled  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Successfully assigned security-context-test-6061/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f to bootstrap-e2e-minion-group-hs9p
services-135  14s  Normal  Scheduled  pod/hostexec  Successfully assigned services-135/hostexec to bootstrap-e2e-minion-group-mp1q
services-135  13s  Normal  Pulled  pod/hostexec  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-135  13s  Normal  Created  pod/hostexec  Created container agnhost
services-135  12s  Normal  Started  pod/hostexec  Started container agnhost
services-5413  26s  Normal  Scheduled  pod/execpod24s8x  Successfully assigned services-5413/execpod24s8x to bootstrap-e2e-minion-group-hs9p
services-5413  23s  Normal  Pulled  pod/execpod24s8x  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  23s  Normal  Created  pod/execpod24s8x  Created container agnhost-pause
services-5413  21s  Normal  Started  pod/execpod24s8x  Started container agnhost-pause
services-5413  34s  Normal  Scheduled  pod/externalname-service-5f6kw  Successfully assigned services-5413/externalname-service-5f6kw to bootstrap-e2e-minion-group-cksd
services-5413  32s  Normal  Pulled  pod/externalname-service-5f6kw  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  32s  Normal  Created  pod/externalname-service-5f6kw  Created container externalname-service
services-5413  31s  Normal  Started  pod/externalname-service-5f6kw  Started container externalname-service
services-5413  35s  Normal  Scheduled  pod/externalname-service-zpns9  Successfully assigned services-5413/externalname-service-zpns9 to bootstrap-e2e-minion-group-l1kf
services-5413  31s  Normal  Pulled  pod/externalname-service-zpns9  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  31s  Normal  Created  pod/externalname-service-zpns9  Created container externalname-service
services-5413  30s  Normal  Started  pod/externalname-service-zpns9  Started container externalname-service
services-5413  35s  Normal  SuccessfulCreate  replicationcontroller/externalname-service  Created pod: externalname-service-zpns9
services-5413  35s  Normal  SuccessfulCreate  replicationcontroller/externalname-service  Created pod: externalname-service-5f6kw
services-5744  52s  Normal  Scheduled  pod/execpod9mmhl  Successfully assigned services-5744/execpod9mmhl to bootstrap-e2e-minion-group-l1kf
services-5744  51s  Normal  Pulled  pod/execpod9mmhl  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5744  51s  Normal  Created  pod/execpod9mmhl  Created container agnhost-pause
services-5744  51s  Normal  Started  pod/execpod9mmhl  Started container agnhost-pause
services-5744  63s  Normal  Scheduled  pod/externalsvc-6l4f2  Successfully assigned services-5744/externalsvc-6l4f2 to bootstrap-e2e-minion-group-mp1q
services-5744  61s  Normal  Pulled  pod/externalsvc-6l4f2  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5744  61s  Normal  Created  pod/externalsvc-6l4f2  Created container externalsvc
services-5744  61s  Normal  Started  pod/externalsvc-6l4f2  Started container externalsvc
services-5744  44s  Normal  Killing  pod/externalsvc-6l4f2  Stopping container externalsvc
services-5744  63s  Normal  Scheduled  pod/externalsvc-ff22c  Successfully assigned services-5744/externalsvc-ff22c to bootstrap-e2e-minion-group-hs9p
services-5744  61s  Normal  Pulled  pod/externalsvc-ff22c  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5744  61s  Normal  Created  pod/externalsvc-ff22c  Created container externalsvc
services-5744  60s  Normal  Started  pod/externalsvc-ff22c  Started container externalsvc
services-5744  44s  Normal  Killing  pod/externalsvc-ff22c  Stopping container externalsvc
services-5744  64s  Normal  SuccessfulCreate  replicationcontroller/externalsvc  Created pod: externalsvc-ff22c
services-5744  63s  Normal  SuccessfulCreate  replicationcontroller/externalsvc  Created pod: externalsvc-6l4f2
statefulset-1548  97s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-1548/ss2-0 to bootstrap-e2e-minion-group-l1kf
statefulset-1548  96s  Warning  FailedMount  pod/ss2-0  MountVolume.SetUp failed for volume "default-token-sslzp" : failed to sync secret cache: timed out waiting for the condition
statefulset-1548  93s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1548  93s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-1548  93s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-1548  48s  Normal  Killing  pod/ss2-0  Stopping container webserver
statefulset-1548  48s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-1548/ss2-0 to bootstrap-e2e-minion-group-l1kf
statefulset-1548  45s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1548  45s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-1548  44s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-1548  90s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-1548/ss2-1 to bootstrap-e2e-minion-group-hs9p
statefulset-1548  88s  Normal  Pulling  pod/ss2-1  Pulling image "docker.io/library/httpd:2.4.38-alpine"
statefulset-1548  77s  Normal  Pulled  pod/ss2-1  Successfully pulled image "docker.io/library/httpd:2.4.38-alpine"
statefulset-1548  74s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-1548  74s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-1548  47s  Normal  Killing  pod/ss2-1  Stopping container webserver
statefulset-1548  42s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-1548/ss2-1 to bootstrap-e2e-minion-group-hs9p
statefulset-1548  38s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1548  38s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-1548  37s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-1548  66s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-1548/ss2-2 to bootstrap-e2e-minion-group-hs9p
statefulset-1548  63s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1548  63s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-1548  62s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-1548  47s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-1548  31s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-1548/ss2-2 to bootstrap-e2e-minion-group-cksd
statefulset-1548  29s  Normal  Pulling  pod/ss2-2  Pulling image "docker.io/library/httpd:2.4.38-alpine"
statefulset-1548  19s  Normal  Pulled  pod/ss2-2  Successfully pulled image "docker.io/library/httpd:2.4.38-alpine"
statefulset-1548  19s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-1548  19s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-1548  48s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-0 in StatefulSet ss2 successful
statefulset-1548  42s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-1 in StatefulSet ss2 successful
statefulset-1548  31s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-2 in StatefulSet ss2 successful
statefulset-1548  47s  Warning  FailedToUpdateEndpoint  endpoints/test  Failed to update endpoint statefulset-1548/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
volume-2853  9s  Normal  Scheduled  pod/exec-volume-test-preprovisionedpv-rg5z  Successfully assigned volume-2853/exec-volume-test-preprovisionedpv-rg5z to bootstrap-e2e-minion-group-cksd
volume-2853  9s  Warning  FailedMount  pod/exec-volume-test-preprovisionedpv-rg5z  Unable to attach or mount volumes: unmounted volumes=[default-token-72rwq vol1], unattached volumes=[default-token-72rwq vol1]: error processing PVC volume-2853/pvc-x2vpz: failed to fetch PVC from API server: persistentvolumeclaims "pvc-x2vpz" is forbidden: User "system:node:bootstrap-e2e-minion-group-cksd" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "volume-2853": no relationship found between node "bootstrap-e2e-minion-group-cksd" and this object
volume-2853  4s  Normal  SuccessfulAttachVolume  pod/exec-volume-test-preprovisionedpv-rg5z  AttachVolume.Attach succeeded for volume "gcepd-gx77m"
volume-2853  27s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-x2vpz  storageclass.storage.k8s.io "volume-2853" not found
volume-3746  39s  Normal  Scheduled  pod/gcepd-client  Successfully assigned volume-3746/gcepd-client to bootstrap-e2e-minion-group-l1kf
volume-3746  33s  Normal  SuccessfulAttachVolume  pod/gcepd-client  AttachVolume.Attach succeeded for volume "gcepd-volume-0"
volume-3746  18s  Normal  Pulled  pod/gcepd-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-3746  18s  Normal  Created  pod/gcepd-client  Created container gcepd-client
volume-3746  17s  Normal  Started  pod/gcepd-client  Started container gcepd-client
volume-3746  5s  Normal  Killing  pod/gcepd-client  Stopping container gcepd-client
volume-3746  87s  Normal  Scheduled  pod/gcepd-injector  Successfully assigned volume-3746/gcepd-injector to bootstrap-e2e-minion-group-hs9p
volume-3746  79s  Normal  SuccessfulAttachVolume  pod/gcepd-injector  AttachVolume.Attach succeeded for volume "gcepd-volume-0"
volume-3746  71s  Normal  Pulled  pod/gcepd-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-3746  70s  Normal  Created  pod/gcepd-injector  Created container gcepd-injector
volume-3746  69s  Normal  Started  pod/gcepd-injector  Started container gcepd-injector
volume-3746  52s  Normal  Killing  pod/gcepd-injector  Stopping container gcepd-injector
volume-6261  22s  Normal  Scheduled  pod/gluster-client  Successfully assigned volume-6261/gluster-client to bootstrap-e2e-minion-group-l1kf
volume-6261  17s  Normal  Pulled  pod/gluster-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-6261  17s  Normal  Created  pod/gluster-client  Created container gluster-client
volume-6261  15s  Normal  Started  pod/gluster-client  Started container gluster-client
volume-6261  6s  Normal  Killing  pod/gluster-client  Stopping container gluster-client
volume-6261  54s  Normal  Scheduled  pod/gluster-injector  Successfully assigned volume-6261/gluster-injector to bootstrap-e2e-minion-group-hs9p
volume-6261  50s  Normal  Pulled  pod/gluster-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-6261  50s  Normal  Created  pod/gluster-injector  Created container gluster-injector
volume-6261  49s  Normal  Started  pod/gluster-injector  Started container gluster-injector
volume-6261  35s  Normal  Killing  pod/gluster-injector  Stopping container gluster-injector
volume-6261  89s  Normal  Scheduled  pod/gluster-server  Successfully assigned volume-6261/gluster-server to bootstrap-e2e-minion-group-l1kf
volume-6261  86s  Normal  Pulling  pod/gluster-server  Pulling image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
volume-6261  69s  Normal  Pulled  pod/gluster-server  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0"
volume-6261  69s  Normal  Created  pod/gluster-server  Created container gluster-server
volume-6261  69s  Normal  Started  pod/gluster-server  Started container gluster-server
volume-6261  65s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-5clft  storageclass.storage.k8s.io "volume-6261" not found
volume-8901  24s  Normal  Scheduled  pod/exec-volume-test-preprovisionedpv-8jlb  Successfully assigned volume-8901/exec-volume-test-preprovisionedpv-8jlb to bootstrap-e2e-minion-group-cksd
volume-8901  19s  Normal  SuccessfulAttachVolume  pod/exec-volume-test-preprovisionedpv-8jlb  AttachVolume.Attach succeeded for volume "gcepd-52jd5"
volume-8901  12s  Normal  Pulling  pod/exec-volume-test-preprovisionedpv-8jlb  Pulling image "docker.io/library/nginx:1.14-alpine"
volume-8901  10s  Normal  Pulled  pod/exec-volume-test-preprovisionedpv-8jlb  Successfully pulled image "docker.io/library/nginx:1.14-alpine"
volume-8901  10s  Normal  Created  pod/exec-volume-test-preprovisionedpv-8jlb  Created container exec-container-preprovisionedpv-8jlb
volume-8901  9s  Normal  Started  pod/exec-volume-test-preprovisionedpv-8jlb  Started container exec-container-preprovisionedpv-8jlb
volume-8901  40s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-lsmb2  storageclass.storage.k8s.io "volume-8901" not found
volume-expand-7397  3s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7397  3s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7397  3s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-7397  3s  Normal  ExternalProvisioning  persistentvolumeclaim/csi-hostpath7j47j  waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-expand-7397" or manually created by system administrator
volume-expand-7397  4s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volume-expand-8398  97s  Warning  FailedMount  pod/csi-hostpath-attacher-0  MountVolume.SetUp failed for volume "csi-attacher-token-djdvb" : failed to sync secret cache: timed out waiting for the condition
volume-expand-8398  94s  Normal  Pulling  pod/csi-hostpath-attacher-0  Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
volume-expand-8398  82s  Normal  Pulled  pod/csi-hostpath-attacher-0  Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
volume-expand-8398  81s  Normal  Created  pod/csi-hostpath-attacher-0  Created container csi-attacher
volume-expand-8398  80s  Normal  Started  pod/csi-hostpath-attacher-0  Started container csi-attacher
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpath-attacher-0  Stopping container csi-attacher
volume-expand-8398  100s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-8398  98s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
volume-expand-8398  97s  Warning  FailedMount  pod/csi-hostpath-provisioner-0  MountVolume.SetUp failed for volume "csi-provisioner-token-hxjvc" : failed to sync secret cache: timed out waiting for the condition
volume-expand-8398  94s  Normal  Pulling  pod/csi-hostpath-provisioner-0  Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
volume-expand-8398  82s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
volume-expand-8398  82s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
volume-expand-8398  80s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpath-provisioner-0  Stopping container csi-provisioner
volume-expand-8398  99s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-8398  98s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
volume-expand-8398  97s  Warning  FailedMount  pod/csi-hostpath-resizer-0  MountVolume.SetUp failed for volume "csi-resizer-token-k6kxm" : failed to sync secret cache: timed out waiting for the condition
volume-expand-8398  94s  Normal  Pulling  pod/csi-hostpath-resizer-0  Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
volume-expand-8398  82s  Normal  Pulled  pod/csi-hostpath-resizer-0  Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
volume-expand-8398  81s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
volume-expand-8398  80s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpath-resizer-0  Stopping container csi-resizer
volume-expand-8398  99s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-8398  99s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
volume-expand-8398  88s  Normal  ExternalProvisioning  persistentvolumeclaim/csi-hostpathg5psf  waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-expand-8398" or manually created by system administrator
volume-expand-8398  78s  Normal  Provisioning  persistentvolumeclaim/csi-hostpathg5psf  External provisioner is provisioning volume for claim "volume-expand-8398/csi-hostpathg5psf"
volume-expand-8398  78s  Normal  ProvisioningSucceeded  persistentvolumeclaim/csi-hostpathg5psf  Successfully provisioned volume pvc-a1fb27b7-1e97-4074-b624-d3b45e008953
volume-expand-8398  100s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
volume-expand-8398  94s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
volume-expand-8398  93s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
volume-expand-8398  92s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
volume-expand-8398  92s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
volume-expand-8398  80s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
volume-expand-8398  80s  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
volume-expand-8398  79s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
volume-expand-8398  79s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
volume-expand-8398  77s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
volume-expand-8398  76s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
volume-expand-8398  76s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container node-driver-registrar
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container liveness-probe
volume-expand-8398  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container hostpath
volume-expand-8398  26s  Warning  Unhealthy  pod/csi-hostpathplugin-0  Liveness probe failed: Get http://10.64.4.7:9898/healthz: dial tcp 10.64.4.7:9898: connect: connection refused
volume-expand-8398  25s  Warning  FailedPreStopHook  pod/csi-hostpathplugin-0  Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container "node-driver-registrar" in Pod "csi-hostpathplugin-0_volume-expand-8398(ae2166a9-8612-4715-b136-4777ad749c53)" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \"exec: \\\"/bin/sh\\\": stat /bin/sh: no such file or directory\": unknown\r\n"
volume-expand-8398  101s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
volume-expand-8398  97s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-dfn8z" : failed to sync secret cache: timed out waiting for the condition
volume-expand-8398  94s  Normal  Pulling  pod/csi-snapshotter-0  Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
volume-expand-8398  84s  Normal  Pulled  pod/csi-snapshotter-0  Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
volume-expand-8398  82s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
volume-expand-8398  81s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
volume-expand-8398  25s  Normal  Killing  pod/csi-snapshotter-0  Stopping container csi-snapshotter
volume-expand-8398  99s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
volume-expand-8398  99s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
volumemode-2959  63s  Normal  WaitForFirstConsumer  persistentvolumeclaim/gcepdslb4x  waiting for first consumer to be created before binding
volumemode-2959  59s  Normal  ProvisioningSucceeded  persistentvolumeclaim/gcepdslb4x  Successfully provisioned volume pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d using kubernetes.io/gce-pd
volumemode-2959  48s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-hs9p-84tw6  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-2959  48s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-hs9p-84tw6  Created container agnhost
volumemode-2959  47s  Normal  Started
pod/hostexec-bootstrap-e2e-minion-group-hs9p-84tw6               Started container agnhost\nvolumemode-2959              36s         Normal    Killing                   pod/hostexec-bootstrap-e2e-minion-group-hs9p-84tw6               Stopping container agnhost\nvolumemode-2959              58s         Normal    Scheduled                 pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        Successfully assigned volumemode-2959/security-context-048b0a34-3266-424b-8854-40327b0dbe4f to bootstrap-e2e-minion-group-hs9p\nvolumemode-2959              56s         Normal    Pulled                    pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-2959              56s         Normal    Created                   pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        Created container write-pod\nvolumemode-2959              55s         Normal    Started                   pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        Started container write-pod\nvolumemode-2959              53s         Normal    SuccessfulAttachVolume    pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        AttachVolume.Attach succeeded for volume \"pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d\"\nvolumemode-2959              36s         Normal    Killing                   pod/security-context-048b0a34-3266-424b-8854-40327b0dbe4f        Stopping container write-pod\nvolumemode-5109              28s         Normal    Scheduled                 pod/external-provisioner-m5hmh                                   Successfully assigned volumemode-5109/external-provisioner-m5hmh to bootstrap-e2e-minion-group-l1kf\nvolumemode-5109              24s         Normal    Pulling                   pod/external-provisioner-m5hmh                                   Pulling image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\"\nvolumemode-8148              90s         Normal    Scheduled                 pod/gluster-server                                               Successfully assigned volumemode-8148/gluster-server to bootstrap-e2e-minion-group-hs9p\nvolumemode-8148              88s         Normal    Pulling                   pod/gluster-server                                               Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolumemode-8148              58s         Normal    Pulled                    pod/gluster-server                                               Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolumemode-8148              58s         Normal    Created                   pod/gluster-server                                               Created container gluster-server\nvolumemode-8148              56s         Normal    Started                   pod/gluster-server                                               Started container gluster-server\nvolumemode-8148              6s          Normal    Killing                   pod/gluster-server                                               Stopping container gluster-server\nvolumemode-8148              25s         Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-8148              25s         Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Created container 
agnhost\nvolumemode-8148              25s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Started container agnhost\nvolumemode-8148              15s         Normal    Killing                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Stopping container agnhost\nvolumemode-8148              51s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-8gj7n                                  storageclass.storage.k8s.io \"volumemode-8148\" not found\nvolumemode-8148              38s         Normal    Scheduled                 pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Successfully assigned volumemode-8148/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8 to bootstrap-e2e-minion-group-l1kf\nvolumemode-8148              36s         Normal    Pulled                    pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-8148              36s         Normal    Created                   pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Created container write-pod\nvolumemode-8148              35s         Normal    Started                   pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Started container write-pod\nvolumemode-8148              15s         Normal    Killing                   pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Stopping container write-pod\nwebhook-1939                 36s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-qxx45                   Successfully assigned webhook-1939/sample-webhook-deployment-5f65f8c764-qxx45 to bootstrap-e2e-minion-group-l1kf\nwebhook-1939                 32s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-qxx45                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-1939                 31s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-qxx45                   Created container sample-webhook\nwebhook-1939                 30s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-qxx45                   Started container sample-webhook\nwebhook-1939                 36s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-qxx45\nwebhook-1939                 36s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-3926                 31s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-xv844                   Successfully assigned webhook-3926/sample-webhook-deployment-5f65f8c764-xv844 to bootstrap-e2e-minion-group-hs9p\nwebhook-3926                 30s         Warning   FailedMount               pod/sample-webhook-deployment-5f65f8c764-xv844                   MountVolume.SetUp failed for volume \"webhook-certs\" : failed to sync secret cache: timed out waiting for the condition\nwebhook-3926                 27s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-xv844                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on 
machine\nwebhook-3926                 27s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-xv844                   Created container sample-webhook\nwebhook-3926                 26s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-xv844                   Started container sample-webhook\nwebhook-3926                 31s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-xv844\nwebhook-3926                 32s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-3926                 17s         Normal    Scheduled                 pod/webhook-to-be-mutated                                        Successfully assigned webhook-3926/webhook-to-be-mutated to bootstrap-e2e-minion-group-mp1q\nwebhook-3926                 16s         Normal    Pulling                   pod/webhook-to-be-mutated                                        Pulling image \"webhook-added-image\"\nwebhook-526                  38s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-94gws                   Successfully assigned webhook-526/sample-webhook-deployment-5f65f8c764-94gws to bootstrap-e2e-minion-group-l1kf\nwebhook-526                  36s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-94gws                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-526                  36s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-94gws                   Created container sample-webhook\nwebhook-526                  35s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-94gws                   Started container sample-webhook\nwebhook-526                  39s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-94gws\nwebhook-526                  39s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\n"
Jan 17 13:34:40.112: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get replicationcontrollers --all-namespaces'
Jan 17 13:34:40.568: INFO: stderr: ""
Jan 17 13:34:40.568: INFO: stdout: "NAMESPACE       NAME                   DESIRED   CURRENT   READY   AGE\nkubectl-2531    rc1mt9p7dghkt          1         0         0       1s\nservices-5413   externalname-service   2         2         2       37s\n"
Jan 17 13:34:41.037: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get podtemplates --all-namespaces'
Jan 17 13:34:41.401: INFO: stderr: ""
Jan 17 13:34:41.401: INFO: stdout: "NAMESPACE      NAME                CONTAINERS   IMAGES          POD LABELS\nkubectl-2531   pt1namemt9p7dghkt   container9   fedora:latest   pt=01\n"
... skipping 38 lines ...
Jan 17 13:34:55.408: INFO: stdout: "NAMESPACE      NAME                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                            AGE\nkube-system    fluentd-gcp-v3.2.0    5         5         5       5            5           beta.kubernetes.io/os=linux                                              5m26s\nkube-system    metadata-proxy-v0.1   5         5         5       5            5           beta.kubernetes.io/os=linux,cloud.google.com/metadata-proxy-ready=true   5m26s\nkubectl-2531   ds6mt9p7dghkt         0         0         0       0            0           <none>                                                                   1s\n"
Jan 17 13:34:56.776: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get deployments --all-namespaces'
Jan 17 13:34:57.947: INFO: stderr: ""
Jan 17 13:34:57.948: INFO: stdout: "NAMESPACE         NAME                             READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment-8681   test-rolling-update-deployment   1/1     1            1           39s\nkube-system       coredns                          2/2     2            2           5m29s\nkube-system       event-exporter-v0.3.1            1/1     1            1           5m29s\nkube-system       fluentd-gcp-scaler               1/1     1            1           5m22s\nkube-system       kube-dns-autoscaler              1/1     1            1           5m29s\nkube-system       kubernetes-dashboard             1/1     1            1           5m22s\nkube-system       l7-default-backend               1/1     1            1           5m29s\nkube-system       metrics-server-v0.3.6            1/1     1            1           5m26s\nkubectl-2531      deployment4mt9p7dghkt            0/1     0            0           1s\nwebhook-8375      sample-webhook-deployment        0/1     1            0           7s\nwebhook-8725      sample-webhook-deployment        0/1     0            0           2s\n"
Jan 17 13:34:58.672: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 17 13:34:59.942: INFO: stderr: ""
Jan 17 13:34:59.942: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                    OBJECT                                                           MESSAGE\nclientset-9013                       43s         Normal    Scheduled                 pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70                      Successfully assigned clientset-9013/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70 to bootstrap-e2e-minion-group-hs9p\nclientset-9013                       40s         Normal    Pulled                    pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70                      Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nclientset-9013                       40s         Normal    Created                   pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70                      Created container nginx\nclientset-9013                       38s         Normal    Started                   pod/poda58e2fe7-95ec-43ad-bef0-89a0698e7a70                      Started container nginx\nconfigmap-912                        28s         Normal    Scheduled                 pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51          Successfully assigned configmap-912/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51 to bootstrap-e2e-minion-group-cksd\nconfigmap-912                        25s         Normal    Pulled                    pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51          Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-912                        25s         Normal    Created                   pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51          Created container configmap-volume-test\nconfigmap-912                        24s         Normal    Started                   pod/pod-configmaps-333c2d29-5eeb-4755-9852-42073b279c51          Started container configmap-volume-test\ncontainer-probe-8429                 15s         Normal    Scheduled                 pod/liveness-02f5e306-9f37-436b-a3ad-8cd041eccab3                Successfully assigned container-probe-8429/liveness-02f5e306-9f37-436b-a3ad-8cd041eccab3 to bootstrap-e2e-minion-group-hs9p\ncontainer-probe-8429                 11s         Normal    Pulled                    pod/liveness-02f5e306-9f37-436b-a3ad-8cd041eccab3                Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainer-probe-8429                 11s         Normal    Created                   pod/liveness-02f5e306-9f37-436b-a3ad-8cd041eccab3                Created container liveness\ncontainer-probe-8429                 9s          Normal    Started                   pod/liveness-02f5e306-9f37-436b-a3ad-8cd041eccab3                Started container liveness\ncontainer-probe-9310                 2m3s        Normal    Scheduled                 pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0                 Successfully assigned container-probe-9310/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0 to bootstrap-e2e-minion-group-hs9p\ncontainer-probe-9310                 2m2s        Normal    Pulling                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0                 Pulling image \"docker.io/library/busybox:1.29\"\ncontainer-probe-9310                 2m1s        Normal    Pulled                    pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0                 Successfully pulled image \"docker.io/library/busybox:1.29\"\ncontainer-probe-9310                 2m1s        
Normal    Created                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0                 Created container busybox\ncontainer-probe-9310                 2m1s        Normal    Started                   pod/busybox-4ef43f01-cad3-4189-875e-d9443eab5ce0                 Started container busybox\ncsi-mock-volumes-6067                109s        Normal    Pulling                   pod/csi-mockplugin-0                                             Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-6067                103s        Normal    Pulled                    pod/csi-mockplugin-0                                             Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-6067                102s        Normal    Created                   pod/csi-mockplugin-0                                             Created container csi-provisioner\ncsi-mock-volumes-6067                101s        Normal    Started                   pod/csi-mockplugin-0                                             Started container csi-provisioner\ncsi-mock-volumes-6067                101s        Normal    Pulled                    pod/csi-mockplugin-0                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-6067                101s        Normal    Created                   pod/csi-mockplugin-0                                             Created container driver-registrar\ncsi-mock-volumes-6067                100s        Normal    Started                   pod/csi-mockplugin-0                                             Started container driver-registrar\ncsi-mock-volumes-6067                100s        Normal    Pulling                   pod/csi-mockplugin-0                                             Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6067                97s         Normal    Pulled                    pod/csi-mockplugin-0                                             Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6067                97s         Normal    Created                   pod/csi-mockplugin-0                                             Created container mock\ncsi-mock-volumes-6067                97s         Normal    Started                   pod/csi-mockplugin-0                                             Started container mock\ncsi-mock-volumes-6067                109s        Normal    Pulling                   pod/csi-mockplugin-attacher-0                                    Pulling image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\ncsi-mock-volumes-6067                103s        Normal    Pulled                    pod/csi-mockplugin-attacher-0                                    Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\ncsi-mock-volumes-6067                103s        Normal    Created                   pod/csi-mockplugin-attacher-0                                    Created container csi-attacher\ncsi-mock-volumes-6067                101s        Normal    Started                   pod/csi-mockplugin-attacher-0                                    Started container csi-attacher\ncsi-mock-volumes-6067                113s        Normal    SuccessfulCreate          statefulset/csi-mockplugin-attacher                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-6067                113s       
 Normal    SuccessfulCreate          statefulset/csi-mockplugin                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-6067                108s        Normal    ExternalProvisioning      persistentvolumeclaim/pvc-hrxkc                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6067\" or manually created by system administrator\ncsi-mock-volumes-6067                96s         Normal    Provisioning              persistentvolumeclaim/pvc-hrxkc                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-6067/pvc-hrxkc\"\ncsi-mock-volumes-6067                96s         Normal    ProvisioningSucceeded     persistentvolumeclaim/pvc-hrxkc                                  Successfully provisioned volume pvc-702b0b1f-3f68-4a88-8ee5-d4d784438dbe\ncsi-mock-volumes-6067                92s         Normal    SuccessfulAttachVolume    pod/pvc-volume-tester-kpm8w                                      AttachVolume.Attach succeeded for volume \"pvc-702b0b1f-3f68-4a88-8ee5-d4d784438dbe\"\ncsi-mock-volumes-6067                74s         Normal    Pulled                    pod/pvc-volume-tester-kpm8w                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-6067                74s         Normal    Created                   pod/pvc-volume-tester-kpm8w                                      Created container volume-tester\ncsi-mock-volumes-6067                73s         Normal    Started                   pod/pvc-volume-tester-kpm8w                                      Started container volume-tester\ncsi-mock-volumes-6067                71s         Normal    Killing                   pod/pvc-volume-tester-kpm8w                                      Stopping container volume-tester\ndefault                              5m19s       Normal    RegisteredNode            node/bootstrap-e2e-master                                        Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller\ndefault                              5m19s       Normal    Starting                  node/bootstrap-e2e-minion-group-cksd                             Starting kubelet.\ndefault                              5m18s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-cksd                             Node bootstrap-e2e-minion-group-cksd status is now: NodeHasSufficientMemory\ndefault                              5m18s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-cksd                             Node bootstrap-e2e-minion-group-cksd status is now: NodeHasNoDiskPressure\ndefault                              5m18s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-cksd                             Node bootstrap-e2e-minion-group-cksd status is now: NodeHasSufficientPID\ndefault                              5m19s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-cksd                             Updated Node Allocatable limit across pods\ndefault                              5m17s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-cksd                             Starting containerd container runtime...\ndefault                              5m17s       Warning   DockerStart               node/bootstrap-e2e-minion-group-cksd                            
 Starting Docker Application Container Engine...\ndefault                              5m17s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-cksd                             Started Kubernetes kubelet.\ndefault                              5m17s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-cksd                             Node bootstrap-e2e-minion-group-cksd status is now: NodeReady\ndefault                              5m16s       Normal    Starting                  node/bootstrap-e2e-minion-group-cksd                             Starting kube-proxy.\ndefault                              5m14s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-cksd                             Node bootstrap-e2e-minion-group-cksd event: Registered Node bootstrap-e2e-minion-group-cksd in Controller\ndefault                              5m19s       Normal    Starting                  node/bootstrap-e2e-minion-group-hs9p                             Starting kubelet.\ndefault                              5m19s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-hs9p                             Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasSufficientMemory\ndefault                              5m19s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-hs9p                             Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasNoDiskPressure\ndefault                              5m19s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-hs9p                             Node bootstrap-e2e-minion-group-hs9p status is now: NodeHasSufficientPID\ndefault                              5m19s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-hs9p                             Updated Node Allocatable limit across pods\ndefault                              5m19s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-hs9p                             Node bootstrap-e2e-minion-group-hs9p event: Registered Node bootstrap-e2e-minion-group-hs9p in Controller\ndefault                              5m16s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-hs9p                             Starting containerd container runtime...\ndefault                              5m16s       Warning   DockerStart               node/bootstrap-e2e-minion-group-hs9p                             Starting Docker Application Container Engine...\ndefault                              5m16s       Normal    Starting                  node/bootstrap-e2e-minion-group-hs9p                             Starting kube-proxy.\ndefault                              5m16s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-hs9p                             Started Kubernetes kubelet.\ndefault                              5m9s        Normal    NodeReady                 node/bootstrap-e2e-minion-group-hs9p                             Node bootstrap-e2e-minion-group-hs9p status is now: NodeReady\ndefault                              5m18s       Normal    Starting                  node/bootstrap-e2e-minion-group-l1kf                             Starting kubelet.\ndefault                              5m18s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-l1kf                             Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasSufficientMemory\ndefault                              
5m18s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-l1kf                             Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasNoDiskPressure\ndefault                              5m18s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-l1kf                             Node bootstrap-e2e-minion-group-l1kf status is now: NodeHasSufficientPID\ndefault                              5m18s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-l1kf                             Updated Node Allocatable limit across pods\ndefault                              5m17s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-l1kf                             Node bootstrap-e2e-minion-group-l1kf status is now: NodeReady\ndefault                              5m16s       Normal    Starting                  node/bootstrap-e2e-minion-group-l1kf                             Starting kube-proxy.\ndefault                              5m15s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-l1kf                             Starting containerd container runtime...\ndefault                              5m15s       Warning   DockerStart               node/bootstrap-e2e-minion-group-l1kf                             Starting Docker Application Container Engine...\ndefault                              5m15s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-l1kf                             Started Kubernetes kubelet.\ndefault                              5m14s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-l1kf                             Node bootstrap-e2e-minion-group-l1kf event: Registered Node bootstrap-e2e-minion-group-l1kf in Controller\ndefault                              5m17s       Normal    Starting                  node/bootstrap-e2e-minion-group-mp1q                             Starting kubelet.\ndefault                              5m17s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-mp1q                             Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasSufficientMemory\ndefault                              5m17s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-mp1q                             Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasNoDiskPressure\ndefault                              5m17s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-mp1q                             Node bootstrap-e2e-minion-group-mp1q status is now: NodeHasSufficientPID\ndefault                              5m17s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-mp1q                             Updated Node Allocatable limit across pods\ndefault                              5m16s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-mp1q                             Node bootstrap-e2e-minion-group-mp1q status is now: NodeReady\ndefault                              5m15s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-mp1q                             Starting containerd container runtime...\ndefault                              5m15s       Warning   DockerStart               node/bootstrap-e2e-minion-group-mp1q                             Starting Docker Application Container Engine...\ndefault                              5m15s       Warning   KubeletStart              
node/bootstrap-e2e-minion-group-mp1q                             Started Kubernetes kubelet.\ndefault                              5m15s       Normal    Starting                  node/bootstrap-e2e-minion-group-mp1q                             Starting kube-proxy.\ndefault                              5m14s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-mp1q                             Node bootstrap-e2e-minion-group-mp1q event: Registered Node bootstrap-e2e-minion-group-mp1q in Controller\ndefault                              3s          Warning   FailedToCreateEndpoint    endpoints/csi-snapshotter                                        Failed to create endpoint for service provisioning-8819/csi-snapshotter: endpoints \"csi-snapshotter\" already exists\ndefault                              44s         Normal    VolumeDelete              persistentvolume/pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d        googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-80fda8bf-5e35-4cdc-8b3b-869c14208e5d' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource\ndefault                              76s         Normal    VolumeDelete              persistentvolume/pvc-cc1af00c-efdc-48d1-a7e3-768ed69fd2d7        googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-cc1af00c-efdc-48d1-a7e3-768ed69fd2d7' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource\ndefault                              70s         Normal    VolumeDelete              persistentvolume/pvc-d33f4f03-546d-4c84-b6e5-b1f7c6e4e55d        googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-d33f4f03-546d-4c84-b6e5-b1f7c6e4e55d' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource\ndeployment-8681                      51s         Normal    Scheduled                 pod/test-rolling-update-controller-w7fxq                         Successfully assigned deployment-8681/test-rolling-update-controller-w7fxq to bootstrap-e2e-minion-group-hs9p\ndeployment-8681                      50s         Warning   FailedMount               pod/test-rolling-update-controller-w7fxq                         MountVolume.SetUp failed for volume \"default-token-4h25c\" : failed to sync secret cache: timed out waiting for the condition\ndeployment-8681                      46s         Normal    Pulled                    pod/test-rolling-update-controller-w7fxq                         Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-8681                      45s         Normal    Created                   pod/test-rolling-update-controller-w7fxq                         Created container httpd\ndeployment-8681                      44s         Normal    Started                   pod/test-rolling-update-controller-w7fxq                         Started container httpd\ndeployment-8681                      33s         Normal    Killing                   pod/test-rolling-update-controller-w7fxq                         Stopping container httpd\ndeployment-8681                
      51s         Normal    SuccessfulCreate          replicaset/test-rolling-update-controller                        Created pod: test-rolling-update-controller-w7fxq\ndeployment-8681                      33s         Normal    SuccessfulDelete          replicaset/test-rolling-update-controller                        Deleted pod: test-rolling-update-controller-w7fxq\ndeployment-8681                      39s         Normal    Scheduled                 pod/test-rolling-update-deployment-67cf4f6444-7trmk              Successfully assigned deployment-8681/test-rolling-update-deployment-67cf4f6444-7trmk to bootstrap-e2e-minion-group-cksd\ndeployment-8681                      39s         Normal    Pulled                    pod/test-rolling-update-deployment-67cf4f6444-7trmk              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ndeployment-8681                      38s         Normal    Created                   pod/test-rolling-update-deployment-67cf4f6444-7trmk              Created container agnhost\ndeployment-8681                      38s         Normal    Started                   pod/test-rolling-update-deployment-67cf4f6444-7trmk              Started container agnhost\ndeployment-8681                      40s         Normal    SuccessfulCreate          replicaset/test-rolling-update-deployment-67cf4f6444             Created pod: test-rolling-update-deployment-67cf4f6444-7trmk\ndeployment-8681                      40s         Normal    ScalingReplicaSet         deployment/test-rolling-update-deployment                        Scaled up replica set test-rolling-update-deployment-67cf4f6444 to 1\ndeployment-8681                      33s         Normal    ScalingReplicaSet         deployment/test-rolling-update-deployment                        Scaled down replica set test-rolling-update-controller to 0\ndisruption-2144                      43s         Normal    Scheduled                 pod/pod-0                                                        Successfully assigned disruption-2144/pod-0 to bootstrap-e2e-minion-group-l1kf\ndisruption-2144                      40s         Normal    Pulling                   pod/pod-0                                                        Pulling image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\ndisruption-2144                      31s         Normal    Pulled                    pod/pod-0                                                        Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\ndisruption-2144                      31s         Normal    Created                   pod/pod-0                                                        Created container busybox\ndisruption-2144                      30s         Normal    Started                   pod/pod-0                                                        Started container busybox\ndisruption-2144                      26s         Normal    Killing                   pod/pod-0                                                        Stopping container busybox\ndisruption-3450                      4s          Normal    NoPods                    poddisruptionbudget/foo                                          No matching pods found\ndisruption-6770                      14s         Normal    NoPods                    poddisruptionbudget/foo                                          No matching pods found\ndisruption-6770                      4s          Warning   FailedScheduling          pod/rs-2dwld                
                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      3s          Warning   FailedScheduling          pod/rs-4kf6r                                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      13s         Normal    Scheduled                 pod/rs-56x9s                                                     Successfully assigned disruption-6770/rs-56x9s to bootstrap-e2e-minion-group-l1kf\ndisruption-6770                      11s         Normal    Pulled                    pod/rs-56x9s                                                     Container image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\" already present on machine\ndisruption-6770                      10s         Normal    Created                   pod/rs-56x9s                                                     Created container busybox\ndisruption-6770                      9s          Normal    Started                   pod/rs-56x9s                                                     Started container busybox\ndisruption-6770                      14s         Normal    Scheduled                 pod/rs-6mxtf                                                     Successfully assigned disruption-6770/rs-6mxtf to bootstrap-e2e-minion-group-mp1q\ndisruption-6770                      11s         Normal    Pulling                   pod/rs-6mxtf                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\ndisruption-6770                      1s          Warning   FailedScheduling          pod/rs-9hxqh                                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      14s         Normal    Scheduled                 pod/rs-gf5hd                                                     Successfully assigned disruption-6770/rs-gf5hd to bootstrap-e2e-minion-group-hs9p\ndisruption-6770                      10s         Normal    Pulling                   pod/rs-gf5hd                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\ndisruption-6770                      3s          Warning   FailedScheduling          pod/rs-m62lt                                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      2s          Warning   FailedScheduling          pod/rs-psf29                                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      14s         Normal    Scheduled                 pod/rs-t6d5c                                                     Successfully assigned disruption-6770/rs-t6d5c to bootstrap-e2e-minion-group-cksd\ndisruption-6770                      11s         Normal    Pulling                   pod/rs-t6d5c                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\ndisruption-6770                      2s          Warning   FailedScheduling          pod/rs-wrbmr                                            
         0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) didn't have free ports for the requested pod ports.\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-t6d5c\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-gf5hd\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-6mxtf\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-m62lt\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-2dwld\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-4kf6r\ndisruption-6770                      14s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-56x9s\ndisruption-6770                      13s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-wrbmr\ndisruption-6770                      13s         Normal    SuccessfulCreate          replicaset/rs                                                    Created pod: rs-psf29\ndisruption-6770                      13s         Normal    SuccessfulCreate          replicaset/rs                                                    (combined from similar events): Created pod: rs-9hxqh\ngc-4013                              32s         Normal    Scheduled                 pod/simpletest.deployment-7ccb84659c-8r2zq                       Successfully assigned gc-4013/simpletest.deployment-7ccb84659c-8r2zq to bootstrap-e2e-minion-group-cksd\ngc-4013                              30s         Normal    Pulled                    pod/simpletest.deployment-7ccb84659c-8r2zq                       Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-4013                              30s         Normal    Created                   pod/simpletest.deployment-7ccb84659c-8r2zq                       Created container nginx\ngc-4013                              29s         Normal    Started                   pod/simpletest.deployment-7ccb84659c-8r2zq                       Started container nginx\ngc-4013                              32s         Normal    Scheduled                 pod/simpletest.deployment-7ccb84659c-nk26k                       Successfully assigned gc-4013/simpletest.deployment-7ccb84659c-nk26k to bootstrap-e2e-minion-group-cksd\ngc-4013                              30s         Normal    Pulled                    pod/simpletest.deployment-7ccb84659c-nk26k                       Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-4013                              30s         Normal    Created                   pod/simpletest.deployment-7ccb84659c-nk26k                       Created container nginx\ngc-4013                              29s         Normal    Started                   pod/simpletest.deployment-7ccb84659c-nk26k                       Started 
container nginx\ngc-4013                              32s         Normal    SuccessfulCreate          replicaset/simpletest.deployment-7ccb84659c                      Created pod: simpletest.deployment-7ccb84659c-nk26k\ngc-4013                              32s         Normal    SuccessfulCreate          replicaset/simpletest.deployment-7ccb84659c                      Created pod: simpletest.deployment-7ccb84659c-8r2zq\ngc-4013                              33s         Normal    ScalingReplicaSet         deployment/simpletest.deployment                                 Scaled up replica set simpletest.deployment-7ccb84659c to 2\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-2qkmz                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-2qkmz to bootstrap-e2e-minion-group-mp1q\ngc-6805                              7s          Normal    Pulling                   pod/simpletest-rc-to-be-deleted-2qkmz                            Pulling image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-2qkmz                            Successfully pulled image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Normal    Created                   pod/simpletest-rc-to-be-deleted-2qkmz                            Created container nginx\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-4gzz9                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-4gzz9 to bootstrap-e2e-minion-group-cksd\ngc-6805                              10s         Warning   FailedMount               pod/simpletest-rc-to-be-deleted-4gzz9                            MountVolume.SetUp failed for volume \"default-token-82kt2\" : failed to sync secret cache: timed out waiting for the condition\ngc-6805                              5s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-4gzz9                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              4s          Normal    Created                   pod/simpletest-rc-to-be-deleted-4gzz9                            Created container nginx\ngc-6805                              2s          Normal    Started                   pod/simpletest-rc-to-be-deleted-4gzz9                            Started container nginx\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-6rl42                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-6rl42 to bootstrap-e2e-minion-group-cksd\ngc-6805                              5s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-6rl42                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              4s          Normal    Created                   pod/simpletest-rc-to-be-deleted-6rl42                            Created container nginx\ngc-6805                              3s          Normal    Started                   pod/simpletest-rc-to-be-deleted-6rl42                            Started container nginx\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-cp848      
                      Successfully assigned gc-6805/simpletest-rc-to-be-deleted-cp848 to bootstrap-e2e-minion-group-mp1q\ngc-6805                              7s          Normal    Pulling                   pod/simpletest-rc-to-be-deleted-cp848                            Pulling image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-cp848                            Successfully pulled image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Normal    Created                   pod/simpletest-rc-to-be-deleted-cp848                            Created container nginx\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-dd65w                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-dd65w to bootstrap-e2e-minion-group-cksd\ngc-6805                              10s         Warning   FailedMount               pod/simpletest-rc-to-be-deleted-dd65w                            MountVolume.SetUp failed for volume \"default-token-82kt2\" : failed to sync secret cache: timed out waiting for the condition\ngc-6805                              6s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-dd65w                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              6s          Normal    Created                   pod/simpletest-rc-to-be-deleted-dd65w                            Created container nginx\ngc-6805                              4s          Normal    Started                   pod/simpletest-rc-to-be-deleted-dd65w                            Started container nginx\ngc-6805                              10s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-dfdjh                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-dfdjh to bootstrap-e2e-minion-group-hs9p\ngc-6805                              6s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-dfdjh                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              6s          Normal    Created                   pod/simpletest-rc-to-be-deleted-dfdjh                            Created container nginx\ngc-6805                              5s          Normal    Started                   pod/simpletest-rc-to-be-deleted-dfdjh                            Started container nginx\ngc-6805                              4s          Normal    Killing                   pod/simpletest-rc-to-be-deleted-dfdjh                            Stopping container nginx\ngc-6805                              2s          Warning   FailedMount               pod/simpletest-rc-to-be-deleted-dfdjh                            MountVolume.SetUp failed for volume \"default-token-82kt2\" : object \"gc-6805\"/\"default-token-82kt2\" not registered\ngc-6805                              10s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-gs6sf                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-gs6sf to bootstrap-e2e-minion-group-cksd\ngc-6805                              5s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-gs6sf                            Container image 
\"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              4s          Normal    Created                   pod/simpletest-rc-to-be-deleted-gs6sf                            Created container nginx\ngc-6805                              2s          Warning   Failed                    pod/simpletest-rc-to-be-deleted-gs6sf                            Error: failed to start container \"nginx\": Error response from daemon: OCI runtime state failed: container_linux.go:1807: checking if container is paused caused \"read /sys/fs/cgroup/freezer/kubepods/besteffort/pod3d08d204-0c01-4fa3-8d21-1e9a4ea33547/nginx/freezer.state: no such device\": unknown\ngc-6805                              10s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-mb6sx                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-mb6sx to bootstrap-e2e-minion-group-mp1q\ngc-6805                              7s          Normal    Pulling                   pod/simpletest-rc-to-be-deleted-mb6sx                            Pulling image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-mb6sx                            Successfully pulled image \"docker.io/library/nginx:1.14-alpine\"\ngc-6805                              3s          Warning   Failed                    pod/simpletest-rc-to-be-deleted-mb6sx                            Error: cannot find volume \"default-token-82kt2\" to mount into container \"nginx\"\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-njmww                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-njmww to bootstrap-e2e-minion-group-hs9p\ngc-6805                              7s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-njmww                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              7s          Normal    Created                   pod/simpletest-rc-to-be-deleted-njmww                            Created container nginx\ngc-6805                              6s          Normal    Started                   pod/simpletest-rc-to-be-deleted-njmww                            Started container nginx\ngc-6805                              4s          Normal    Killing                   pod/simpletest-rc-to-be-deleted-njmww                            Stopping container nginx\ngc-6805                              11s         Normal    Scheduled                 pod/simpletest-rc-to-be-deleted-vk4v6                            Successfully assigned gc-6805/simpletest-rc-to-be-deleted-vk4v6 to bootstrap-e2e-minion-group-hs9p\ngc-6805                              7s          Normal    Pulled                    pod/simpletest-rc-to-be-deleted-vk4v6                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-6805                              6s          Normal    Created                   pod/simpletest-rc-to-be-deleted-vk4v6                            Created container nginx\ngc-6805                              5s          Normal    Started                   pod/simpletest-rc-to-be-deleted-vk4v6                            Started container nginx\ngc-6805                              4s          Normal    Killing                   
pod/simpletest-rc-to-be-deleted-vk4v6                            Stopping container nginx\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-dd65w\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-4gzz9\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-2qkmz\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-6rl42\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-cp848\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-njmww\ngc-6805                              11s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-vk4v6\ngc-6805                              10s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-mb6sx\ngc-6805                              10s         Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                Created pod: simpletest-rc-to-be-deleted-gs6sf\ngc-6805                              9s          Normal    SuccessfulCreate          replicationcontroller/simpletest-rc-to-be-deleted                (combined from similar events): Created pod: simpletest-rc-to-be-deleted-dfdjh\ngcp-volume-9546                      19s         Normal    Scheduled                 pod/gluster-client                                               Successfully assigned gcp-volume-9546/gluster-client to bootstrap-e2e-minion-group-cksd\ngcp-volume-9546                      15s         Normal    Pulled                    pod/gluster-client                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\ngcp-volume-9546                      15s         Normal    Created                   pod/gluster-client                                               Created container gluster-client\ngcp-volume-9546                      14s         Normal    Started                   pod/gluster-client                                               Started container gluster-client\ngcp-volume-9546                      1s          Normal    Killing                   pod/gluster-client                                               Stopping container gluster-client\ngcp-volume-9546                      27s         Normal    Scheduled                 pod/gluster-server                                               Successfully assigned gcp-volume-9546/gluster-server to bootstrap-e2e-minion-group-hs9p\ngcp-volume-9546                      25s         Normal    Pulled                    pod/gluster-server                                               Container image 
\"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\ngcp-volume-9546                      25s         Normal    Created                   pod/gluster-server                                               Created container gluster-server\ngcp-volume-9546                      24s         Normal    Started                   pod/gluster-server                                               Started container gluster-server\njob-2740                             18s         Normal    Pulled                    pod/fail-once-non-local-7hsz7                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2740                             18s         Normal    Created                   pod/fail-once-non-local-7hsz7                                    Created container c\njob-2740                             17s         Normal    Started                   pod/fail-once-non-local-7hsz7                                    Started container c\njob-2740                             15s         Normal    Pulled                    pod/fail-once-non-local-7m8sl                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2740                             14s         Normal    Created                   pod/fail-once-non-local-7m8sl                                    Created container c\njob-2740                             13s         Normal    Started                   pod/fail-once-non-local-7m8sl                                    Started container c\njob-2740                             17s         Normal    Pulled                    pod/fail-once-non-local-9zgtz                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2740                             17s         Normal    Created                   pod/fail-once-non-local-9zgtz                                    Created container c\njob-2740                             16s         Normal    Started                   pod/fail-once-non-local-9zgtz                                    Started container c\njob-2740                             23s         Normal    Pulled                    pod/fail-once-non-local-fbpdc                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2740                             23s         Normal    Created                   pod/fail-once-non-local-fbpdc                                    Created container c\njob-2740                             23s         Normal    Started                   pod/fail-once-non-local-fbpdc                                    Started container c\njob-2740                             23s         Normal    Pulled                    pod/fail-once-non-local-vqp68                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-2740                             23s         Normal    Created                   pod/fail-once-non-local-vqp68                                    Created container c\njob-2740                             23s         Normal    Started                   pod/fail-once-non-local-vqp68                                    Started container c\njob-2740                             26s         Normal    SuccessfulCreate          job/fail-once-non-local                                          Created pod: fail-once-non-local-fbpdc\njob-2740                             26s         Normal  
  SuccessfulCreate          job/fail-once-non-local                                          Created pod: fail-once-non-local-vqp68\njob-2740                             20s         Normal    SuccessfulCreate          job/fail-once-non-local                                          Created pod: fail-once-non-local-7hsz7\njob-2740                             19s         Normal    SuccessfulCreate          job/fail-once-non-local                                          Created pod: fail-once-non-local-9zgtz\njob-2740                             16s         Normal    SuccessfulCreate          job/fail-once-non-local                                          Created pod: fail-once-non-local-7m8sl\njob-2740                             12s         Normal    Completed                 job/fail-once-non-local                                          Job completed\njob-3474                             47s         Normal    Scheduled                 pod/fail-once-local-lk8nv                                        Successfully assigned job-3474/fail-once-local-lk8nv to bootstrap-e2e-minion-group-l1kf\njob-3474                             39s         Normal    Pulled                    pod/fail-once-local-lk8nv                                        Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-3474                             39s         Normal    Created                   pod/fail-once-local-lk8nv                                        Created container c\njob-3474                             38s         Normal    Started                   pod/fail-once-local-lk8nv                                        Started container c\njob-3474                             35s         Normal    SandboxChanged            pod/fail-once-local-lk8nv                                        Pod sandbox changed, it will be killed and re-created.\njob-3474                             34s         Warning   FailedCreatePodSandBox    pod/fail-once-local-lk8nv                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"fail-once-local-lk8nv\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:413: running prestart hook 0 caused \\\\\\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\\\\\\\\\"2020-01-17T13:34:25Z\\\\\\\\\\\\\\\" level=fatal msg=\\\\\\\\\\\\\\\"no such file or directory\\\\\\\\\\\\\\\"\\\\\\\\n\\\\\\\"\\\"\": unknown\njob-3474                             58s         Normal    Scheduled                 pod/fail-once-local-qxkn4                                        Successfully assigned job-3474/fail-once-local-qxkn4 to bootstrap-e2e-minion-group-l1kf\njob-3474                             50s         Normal    Pulled                    pod/fail-once-local-qxkn4                                        Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-3474                             50s         Normal    Created                   pod/fail-once-local-qxkn4                                        Created container c\njob-3474                             49s         Normal    Started                   pod/fail-once-local-qxkn4                                        Started container c\njob-3474                             48s         Normal    Scheduled                 pod/fail-once-local-rq2cm                           
             Successfully assigned job-3474/fail-once-local-rq2cm to bootstrap-e2e-minion-group-l1kf\njob-3474                             43s         Normal    Pulled                    pod/fail-once-local-rq2cm                                        Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-3474                             43s         Normal    Created                   pod/fail-once-local-rq2cm                                        Created container c\njob-3474                             41s         Normal    Started                   pod/fail-once-local-rq2cm                                        Started container c\njob-3474                             58s         Normal    Scheduled                 pod/fail-once-local-v7mzs                                        Successfully assigned job-3474/fail-once-local-v7mzs to bootstrap-e2e-minion-group-l1kf\njob-3474                             51s         Normal    Pulled                    pod/fail-once-local-v7mzs                                        Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-3474                             51s         Normal    Created                   pod/fail-once-local-v7mzs                                        Created container c\njob-3474                             50s         Normal    Started                   pod/fail-once-local-v7mzs                                        Started container c\njob-3474                             49s         Normal    SandboxChanged            pod/fail-once-local-v7mzs                                        Pod sandbox changed, it will be killed and re-created.\njob-3474                             47s         Warning   FailedCreatePodSandBox    pod/fail-once-local-v7mzs                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"fail-once-local-v7mzs\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:413: running prestart hook 0 caused \\\\\\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\\\\\\\\\"2020-01-17T13:34:12Z\\\\\\\\\\\\\\\" level=fatal msg=\\\\\\\\\\\\\\\"no such file or directory\\\\\\\\\\\\\\\"\\\\\\\\n\\\\\\\"\\\"\": unknown\njob-3474                             58s         Normal    SuccessfulCreate          job/fail-once-local                                              Created pod: fail-once-local-v7mzs\njob-3474                             58s         Normal    SuccessfulCreate          job/fail-once-local                                              Created pod: fail-once-local-qxkn4\njob-3474                             49s         Normal    SuccessfulCreate          job/fail-once-local                                              Created pod: fail-once-local-rq2cm\njob-3474                             47s         Normal    SuccessfulCreate          job/fail-once-local                                              Created pod: fail-once-local-lk8nv\njob-3474                             36s         Normal    Completed                 job/fail-once-local                                              Job completed\nkube-system                          4m55s       Normal    Scheduled                 pod/coredns-65567c7b57-sbrn5                                     Successfully assigned kube-system/coredns-65567c7b57-sbrn5 to 
bootstrap-e2e-minion-group-cksd\nkube-system                          4m54s       Normal    Pulling                   pod/coredns-65567c7b57-sbrn5                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m52s       Normal    Pulled                    pod/coredns-65567c7b57-sbrn5                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m52s       Normal    Created                   pod/coredns-65567c7b57-sbrn5                                     Created container coredns\nkube-system                          4m52s       Normal    Started                   pod/coredns-65567c7b57-sbrn5                                     Started container coredns\nkube-system                          5m26s       Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l                                     no nodes available to schedule pods\nkube-system                          5m20s       Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l                                     0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          5m11s       Warning   FailedScheduling          pod/coredns-65567c7b57-vgx2l                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m2s        Normal    Scheduled                 pod/coredns-65567c7b57-vgx2l                                     Successfully assigned kube-system/coredns-65567c7b57-vgx2l to bootstrap-e2e-minion-group-l1kf\nkube-system                          5m1s        Normal    Pulling                   pod/coredns-65567c7b57-vgx2l                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m59s       Normal    Pulled                    pod/coredns-65567c7b57-vgx2l                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m59s       Normal    Created                   pod/coredns-65567c7b57-vgx2l                                     Created container coredns\nkube-system                          4m59s       Normal    Started                   pod/coredns-65567c7b57-vgx2l                                     Started container coredns\nkube-system                          5m31s       Warning   FailedCreate              replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: no providers available to validate pod request\nkube-system                          5m28s       Warning   FailedCreate              replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          5m26s       Normal    SuccessfulCreate          replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-vgx2l\nkube-system                          4m56s       Normal    SuccessfulCreate          replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-sbrn5\nkube-system                          5m31s       Normal    ScalingReplicaSet         deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 1\nkube-system           
               4m56s       Normal    ScalingReplicaSet         deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 2\nkube-system                          5m28s       Warning   FailedScheduling          pod/event-exporter-v0.3.1-747b47fcd-757kq                        no nodes available to schedule pods\nkube-system                          5m8s        Warning   FailedScheduling          pod/event-exporter-v0.3.1-747b47fcd-757kq                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m5s        Normal    Scheduled                 pod/event-exporter-v0.3.1-747b47fcd-757kq                        Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-757kq to bootstrap-e2e-minion-group-hs9p\nkube-system                          5m3s        Normal    Pulling                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Pulling image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          5m1s        Normal    Pulled                    pod/event-exporter-v0.3.1-747b47fcd-757kq                        Successfully pulled image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          5m1s        Normal    Created                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Created container event-exporter\nkube-system                          5m          Normal    Started                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Started container event-exporter\nkube-system                          5m          Normal    Pulling                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          4m59s       Normal    Pulled                    pod/event-exporter-v0.3.1-747b47fcd-757kq                        Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          4m58s       Normal    Created                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Created container prometheus-to-sd-exporter\nkube-system                          4m58s       Normal    Started                   pod/event-exporter-v0.3.1-747b47fcd-757kq                        Started container prometheus-to-sd-exporter\nkube-system                          5m31s       Normal    SuccessfulCreate          replicaset/event-exporter-v0.3.1-747b47fcd                       Created pod: event-exporter-v0.3.1-747b47fcd-757kq\nkube-system                          5m31s       Normal    ScalingReplicaSet         deployment/event-exporter-v0.3.1                                 Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1\nkube-system                          5m24s       Warning   FailedScheduling          pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          no nodes available to schedule pods\nkube-system                          5m8s        Warning   FailedScheduling          pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m5s        Normal    Scheduled                 pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-v7lw9 to 
bootstrap-e2e-minion-group-mp1q\nkube-system                          5m3s        Normal    Pulling                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          Pulling image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          4m57s       Normal    Pulled                    pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          Successfully pulled image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          4m56s       Normal    Created                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          Created container fluentd-gcp-scaler\nkube-system                          4m56s       Normal    Started                   pod/fluentd-gcp-scaler-76d9c77b4d-v7lw9                          Started container fluentd-gcp-scaler\nkube-system                          5m24s       Normal    SuccessfulCreate          replicaset/fluentd-gcp-scaler-76d9c77b4d                         Created pod: fluentd-gcp-scaler-76d9c77b4d-v7lw9\nkube-system                          5m24s       Normal    ScalingReplicaSet         deployment/fluentd-gcp-scaler                                    Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1\nkube-system                          4m22s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-2j564                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-2j564 to bootstrap-e2e-minion-group-l1kf\nkube-system                          4m21s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-2j564                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          4m21s       Normal    Created                   pod/fluentd-gcp-v3.2.0-2j564                                     Created container fluentd-gcp\nkube-system                          4m21s       Normal    Started                   pod/fluentd-gcp-v3.2.0-2j564                                     Started container fluentd-gcp\nkube-system                          4m21s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-2j564                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m21s       Normal    Created                   pod/fluentd-gcp-v3.2.0-2j564                                     Created container prometheus-to-sd-exporter\nkube-system                          4m20s       Normal    Started                   pod/fluentd-gcp-v3.2.0-2j564                                     Started container prometheus-to-sd-exporter\nkube-system                          5m17s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-6q62x                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-6q62x to bootstrap-e2e-minion-group-cksd\nkube-system                          5m16s       Warning   FailedMount               pod/fluentd-gcp-v3.2.0-6q62x                                     MountVolume.SetUp failed for volume \"fluentd-gcp-token-4mg77\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          5m16s       Warning   FailedMount               pod/fluentd-gcp-v3.2.0-6q62x                                     MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                
          5m15s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-6q62x                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-6q62x                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Created                   pod/fluentd-gcp-v3.2.0-6q62x                                     Created container fluentd-gcp\nkube-system                          5m5s        Normal    Started                   pod/fluentd-gcp-v3.2.0-6q62x                                     Started container fluentd-gcp\nkube-system                          5m5s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-6q62x                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          5m5s        Normal    Created                   pod/fluentd-gcp-v3.2.0-6q62x                                     Created container prometheus-to-sd-exporter\nkube-system                          5m5s        Normal    Started                   pod/fluentd-gcp-v3.2.0-6q62x                                     Started container prometheus-to-sd-exporter\nkube-system                          4m8s        Normal    Killing                   pod/fluentd-gcp-v3.2.0-6q62x                                     Stopping container fluentd-gcp\nkube-system                          4m8s        Normal    Killing                   pod/fluentd-gcp-v3.2.0-6q62x                                     Stopping container prometheus-to-sd-exporter\nkube-system                          4m9s        Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-cgd45                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-cgd45 to bootstrap-e2e-minion-group-hs9p\nkube-system                          4m9s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-cgd45                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          4m9s        Normal    Created                   pod/fluentd-gcp-v3.2.0-cgd45                                     Created container fluentd-gcp\nkube-system                          4m8s        Normal    Started                   pod/fluentd-gcp-v3.2.0-cgd45                                     Started container fluentd-gcp\nkube-system                          4m8s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-cgd45                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m8s        Normal    Created                   pod/fluentd-gcp-v3.2.0-cgd45                                     Created container prometheus-to-sd-exporter\nkube-system                          4m8s        Normal    Started                   pod/fluentd-gcp-v3.2.0-cgd45                                     Started container prometheus-to-sd-exporter\nkube-system                          4m27s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-kr7d8                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-kr7d8 to bootstrap-e2e-minion-group-mp1q\nkube-system        
                  4m27s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-kr7d8                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          4m26s       Normal    Created                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Created container fluentd-gcp\nkube-system                          4m26s       Normal    Started                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Started container fluentd-gcp\nkube-system                          4m26s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-kr7d8                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m26s       Normal    Created                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Created container prometheus-to-sd-exporter\nkube-system                          4m26s       Normal    Started                   pod/fluentd-gcp-v3.2.0-kr7d8                                     Started container prometheus-to-sd-exporter\nkube-system                          5m18s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-pxfq4                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-pxfq4 to bootstrap-e2e-minion-group-hs9p\nkube-system                          5m17s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-pxfq4                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Created                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Created container fluentd-gcp\nkube-system                          5m6s        Normal    Started                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Started container fluentd-gcp\nkube-system                          5m6s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-pxfq4                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          5m6s        Normal    Created                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Created container prometheus-to-sd-exporter\nkube-system                          5m6s        Normal    Started                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Started container prometheus-to-sd-exporter\nkube-system                          4m20s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Stopping container fluentd-gcp\nkube-system                          4m20s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-pxfq4                                     Stopping container prometheus-to-sd-exporter\nkube-system                          3m59s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-tqmf5                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-tqmf5 to 
bootstrap-e2e-minion-group-cksd\nkube-system                          3m58s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-tqmf5                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          3m58s       Normal    Created                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Created container fluentd-gcp\nkube-system                          3m58s       Normal    Started                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Started container fluentd-gcp\nkube-system                          3m58s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-tqmf5                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          3m58s       Normal    Created                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Created container prometheus-to-sd-exporter\nkube-system                          3m57s       Normal    Started                   pod/fluentd-gcp-v3.2.0-tqmf5                                     Started container prometheus-to-sd-exporter\nkube-system                          5m16s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-wdzg7                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-wdzg7 to bootstrap-e2e-minion-group-l1kf\nkube-system                          5m15s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wdzg7                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m6s        Normal    Created                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Created container fluentd-gcp\nkube-system                          5m5s        Normal    Started                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Started container fluentd-gcp\nkube-system                          5m5s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wdzg7                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          5m5s        Normal    Created                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Created container prometheus-to-sd-exporter\nkube-system                          5m5s        Normal    Started                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Started container prometheus-to-sd-exporter\nkube-system                          4m26s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Stopping container fluentd-gcp\nkube-system                          4m26s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-wdzg7                                     Stopping container prometheus-to-sd-exporter\nkube-system                          5m19s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-wxzbs                                     Successfully assigned 
kube-system/fluentd-gcp-v3.2.0-wxzbs to bootstrap-e2e-master\nkube-system                          5m12s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-wxzbs                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m52s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m50s       Normal    Created                   pod/fluentd-gcp-v3.2.0-wxzbs                                     Created container fluentd-gcp\nkube-system                          4m50s       Warning   Failed                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Error: failed to start container \"fluentd-gcp\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/kubelet/pods/d6ca37cc-405d-4a79-a6b9-ed5a5527bb94/volumes/kubernetes.io~configmap/config-volume\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/aef5e712b104a3b370ec2a08cec85628d088a1105518e800124113c69ab128b0/merged\\\\\\\" at \\\\\\\"/etc/google-fluentd/config.d\\\\\\\" caused \\\\\\\"stat /var/lib/kubelet/pods/d6ca37cc-405d-4a79-a6b9-ed5a5527bb94/volumes/kubernetes.io~configmap/config-volume: no such file or directory\\\\\\\"\\\"\": unknown\nkube-system                          4m50s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m50s       Warning   Failed                    pod/fluentd-gcp-v3.2.0-wxzbs                                     Error: cannot find volume \"fluentd-gcp-token-4mg77\" to mount into container \"prometheus-to-sd-exporter\"\nkube-system                          2m47s       Warning   FailedMount               pod/fluentd-gcp-v3.2.0-wxzbs                                     Unable to attach or mount volumes: unmounted volumes=[varlog varlibdockercontainers config-volume fluentd-gcp-token-4mg77], unattached volumes=[varlog varlibdockercontainers config-volume fluentd-gcp-token-4mg77]: timed out waiting for the condition\nkube-system                          5m15s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-z6wfx                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-z6wfx to bootstrap-e2e-minion-group-mp1q\nkube-system                          5m14s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m4s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-z6wfx                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          5m4s        Normal    Created                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Created container fluentd-gcp\nkube-system                          5m3s        Normal    Started                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Started container fluentd-gcp\nkube-system     
                     5m3s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-z6wfx                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          5m3s        Normal    Created                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Created container prometheus-to-sd-exporter\nkube-system                          5m3s        Normal    Started                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Started container prometheus-to-sd-exporter\nkube-system                          4m39s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Stopping container fluentd-gcp\nkube-system                          4m39s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-z6wfx                                     Stopping container prometheus-to-sd-exporter\nkube-system                          4m48s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-zkr7k                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-zkr7k to bootstrap-e2e-master\nkube-system                          4m47s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zkr7k                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          4m47s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Created container fluentd-gcp\nkube-system                          4m46s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Started container fluentd-gcp\nkube-system                          4m46s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zkr7k                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m46s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Created container prometheus-to-sd-exporter\nkube-system                          4m40s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zkr7k                                     Started container prometheus-to-sd-exporter\nkube-system                          5m19s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-wxzbs\nkube-system                          5m19s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-pxfq4\nkube-system                          5m18s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-6q62x\nkube-system                          5m17s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-wdzg7\nkube-system                          5m16s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-z6wfx\nkube-system                          4m50s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                    
 Deleted pod: fluentd-gcp-v3.2.0-wxzbs\nkube-system                          4m48s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-zkr7k\nkube-system                          4m39s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-z6wfx\nkube-system                          4m27s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-kr7d8\nkube-system                          4m26s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-wdzg7\nkube-system                          4m22s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-2j564\nkube-system                          4m20s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-pxfq4\nkube-system                          4m9s        Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     Created pod: fluentd-gcp-v3.2.0-cgd45\nkube-system                          4m8s        Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                     Deleted pod: fluentd-gcp-v3.2.0-6q62x\nkube-system                          3m59s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                     (combined from similar events): Created pod: fluentd-gcp-v3.2.0-tqmf5\nkube-system                          5m9s        Normal    LeaderElection            configmap/ingress-gce-lock                                       bootstrap-e2e-master_707d7 became leader\nkube-system                          5m50s       Normal    LeaderElection            endpoints/kube-controller-manager                                bootstrap-e2e-master_1f41b409-083b-4f59-9fa4-872a8b500782 became leader\nkube-system                          5m50s       Normal    LeaderElection            lease/kube-controller-manager                                    bootstrap-e2e-master_1f41b409-083b-4f59-9fa4-872a8b500782 became leader\nkube-system                          5m20s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         no nodes available to schedule pods\nkube-system                          5m18s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m10s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m1s        Normal    Scheduled                 pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-ml5rx to bootstrap-e2e-minion-group-cksd\nkube-system                          5m          Normal    Pulling                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Pulling image 
\"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          4m58s       Normal    Pulled                    pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Successfully pulled image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          4m57s       Normal    Created                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Created container autoscaler\nkube-system                          4m57s       Normal    Started                   pod/kube-dns-autoscaler-65bc6d4889-ml5rx                         Started container autoscaler\nkube-system                          5m25s       Warning   FailedCreate              replicaset/kube-dns-autoscaler-65bc6d4889                        Error creating: pods \"kube-dns-autoscaler-65bc6d4889-\" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount \"kube-dns-autoscaler\" not found\nkube-system                          5m20s       Normal    SuccessfulCreate          replicaset/kube-dns-autoscaler-65bc6d4889                        Created pod: kube-dns-autoscaler-65bc6d4889-ml5rx\nkube-system                          5m31s       Normal    ScalingReplicaSet         deployment/kube-dns-autoscaler                                   Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1\nkube-system                          5m17s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                          5m17s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Created container kube-proxy\nkube-system                          5m17s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-cksd                   Started container kube-proxy\nkube-system                          5m17s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                          5m17s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Created container kube-proxy\nkube-system                          5m17s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-hs9p                   Started container kube-proxy\nkube-system                          5m16s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                          5m16s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Created container kube-proxy\nkube-system                          5m16s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-l1kf                   Started container kube-proxy\nkube-system                          5m16s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.854_6278df2a972d2c\" already present on machine\nkube-system                 
         5m16s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Created container kube-proxy\nkube-system                          5m15s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-mp1q                   Started container kube-proxy\nkube-system                          5m53s       Normal    LeaderElection            endpoints/kube-scheduler                                         bootstrap-e2e-master_02d65249-3a22-48c1-916c-fed1fcef458e became leader\nkube-system                          5m53s       Normal    LeaderElection            lease/kube-scheduler                                             bootstrap-e2e-master_02d65249-3a22-48c1-916c-fed1fcef458e became leader\nkube-system                          5m24s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        no nodes available to schedule pods\nkube-system                          5m19s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.\nkube-system                          5m10s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-tkqpc                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m1s        Normal    Scheduled                 pod/kubernetes-dashboard-7778f8b456-tkqpc                        Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-tkqpc to bootstrap-e2e-minion-group-mp1q\nkube-system                          4m58s       Normal    Pulling                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Pulling image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          4m53s       Normal    Pulled                    pod/kubernetes-dashboard-7778f8b456-tkqpc                        Successfully pulled image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          4m52s       Normal    Created                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Created container kubernetes-dashboard\nkube-system                          4m51s       Normal    Started                   pod/kubernetes-dashboard-7778f8b456-tkqpc                        Started container kubernetes-dashboard\nkube-system                          5m24s       Normal    SuccessfulCreate          replicaset/kubernetes-dashboard-7778f8b456                       Created pod: kubernetes-dashboard-7778f8b456-tkqpc\nkube-system                          5m24s       Normal    ScalingReplicaSet         deployment/kubernetes-dashboard                                  Scaled up replica set kubernetes-dashboard-7778f8b456 to 1\nkube-system                          5m26s       Warning   FailedScheduling          pod/l7-default-backend-678889f899-7nh6w                          no nodes available to schedule pods\nkube-system                          5m8s        Warning   FailedScheduling          pod/l7-default-backend-678889f899-7nh6w                          0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          5m5s        Normal    Scheduled                 pod/l7-default-backend-678889f899-7nh6w                
Successfully assigned kube-system/l7-default-backend-678889f899-7nh6w to bootstrap-e2e-minion-group-l1kf
kube-system  4m57s  Normal  Pulling  pod/l7-default-backend-678889f899-7nh6w  Pulling image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  4m56s  Normal  Pulled  pod/l7-default-backend-678889f899-7nh6w  Successfully pulled image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  4m56s  Normal  Created  pod/l7-default-backend-678889f899-7nh6w  Created container default-http-backend
kube-system  4m48s  Normal  Started  pod/l7-default-backend-678889f899-7nh6w  Started container default-http-backend
kube-system  5m31s  Warning  FailedCreate  replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: no providers available to validate pod request
kube-system  5m28s  Warning  FailedCreate  replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: unable to validate against any pod security policy: []
kube-system  5m26s  Normal  SuccessfulCreate  replicaset/l7-default-backend-678889f899  Created pod: l7-default-backend-678889f899-7nh6w
kube-system  5m31s  Normal  ScalingReplicaSet  deployment/l7-default-backend  Scaled up replica set l7-default-backend-678889f899 to 1
kube-system  5m23s  Normal  Created  pod/l7-lb-controller-bootstrap-e2e-master  Created container l7-lb-controller
kube-system  5m21s  Normal  Started  pod/l7-lb-controller-bootstrap-e2e-master  Started container l7-lb-controller
kube-system  5m23s  Normal  Pulled  pod/l7-lb-controller-bootstrap-e2e-master  Container image "k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1" already present on machine
kube-system  5m16s  Normal  Scheduled  pod/metadata-proxy-v0.1-2hrsk  Successfully assigned kube-system/metadata-proxy-v0.1-2hrsk to bootstrap-e2e-minion-group-mp1q
kube-system  5m15s  Warning  FailedMount  pod/metadata-proxy-v0.1-2hrsk  MountVolume.SetUp failed for volume "metadata-proxy-token-6hblr" : failed to sync secret cache: timed out waiting for the condition
kube-system  5m13s  Normal  Pulling  pod/metadata-proxy-v0.1-2hrsk  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m11s  Normal  Pulled  pod/metadata-proxy-v0.1-2hrsk  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m10s  Normal  Created  pod/metadata-proxy-v0.1-2hrsk  Created container metadata-proxy
kube-system  5m9s  Normal  Started  pod/metadata-proxy-v0.1-2hrsk  Started container metadata-proxy
kube-system  5m9s  Normal  Pulling  pod/metadata-proxy-v0.1-2hrsk  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m8s  Normal  Pulled  pod/metadata-proxy-v0.1-2hrsk  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m6s  Normal  Created  pod/metadata-proxy-v0.1-2hrsk  Created container prometheus-to-sd-exporter
kube-system  5m5s  Normal  Started  pod/metadata-proxy-v0.1-2hrsk  Started container prometheus-to-sd-exporter
kube-system  5m19s  Normal  Scheduled  pod/metadata-proxy-v0.1-4hnjt  Successfully assigned kube-system/metadata-proxy-v0.1-4hnjt to bootstrap-e2e-master
kube-system  5m16s  Normal  Pulling  pod/metadata-proxy-v0.1-4hnjt  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m16s  Normal  Pulled  pod/metadata-proxy-v0.1-4hnjt  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m15s  Normal  Created  pod/metadata-proxy-v0.1-4hnjt  Created container metadata-proxy
kube-system  5m15s  Normal  Started  pod/metadata-proxy-v0.1-4hnjt  Started container metadata-proxy
kube-system  5m15s  Normal  Pulling  pod/metadata-proxy-v0.1-4hnjt  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m13s  Normal  Pulled  pod/metadata-proxy-v0.1-4hnjt  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m12s  Normal  Created  pod/metadata-proxy-v0.1-4hnjt  Created container prometheus-to-sd-exporter
kube-system  5m10s  Normal  Started  pod/metadata-proxy-v0.1-4hnjt  Started container prometheus-to-sd-exporter
kube-system  5m16s  Normal  Scheduled  pod/metadata-proxy-v0.1-8ll7f  Successfully assigned kube-system/metadata-proxy-v0.1-8ll7f to bootstrap-e2e-minion-group-cksd
kube-system  5m15s  Normal  Pulling  pod/metadata-proxy-v0.1-8ll7f  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m13s  Normal  Pulled  pod/metadata-proxy-v0.1-8ll7f  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m12s  Normal  Created  pod/metadata-proxy-v0.1-8ll7f  Created container metadata-proxy
kube-system  5m11s  Normal  Started  pod/metadata-proxy-v0.1-8ll7f  Started container metadata-proxy
kube-system  5m11s  Normal  Pulling  pod/metadata-proxy-v0.1-8ll7f  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m9s  Normal  Pulled  pod/metadata-proxy-v0.1-8ll7f  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m8s  Normal  Created  pod/metadata-proxy-v0.1-8ll7f  Created container prometheus-to-sd-exporter
kube-system  5m6s  Normal  Started  pod/metadata-proxy-v0.1-8ll7f  Started container prometheus-to-sd-exporter
kube-system  5m16s  Normal  Scheduled  pod/metadata-proxy-v0.1-dkm8f  Successfully assigned kube-system/metadata-proxy-v0.1-dkm8f to bootstrap-e2e-minion-group-l1kf
kube-system  5m15s  Normal  Pulling  pod/metadata-proxy-v0.1-dkm8f  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m13s  Normal  Pulled  pod/metadata-proxy-v0.1-dkm8f  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m12s  Normal  Created  pod/metadata-proxy-v0.1-dkm8f  Created container metadata-proxy
kube-system  5m10s  Normal  Started  pod/metadata-proxy-v0.1-dkm8f  Started container metadata-proxy
kube-system  5m10s  Normal  Pulling  pod/metadata-proxy-v0.1-dkm8f  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m9s  Normal  Pulled  pod/metadata-proxy-v0.1-dkm8f  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m6s  Normal  Created  pod/metadata-proxy-v0.1-dkm8f  Created container prometheus-to-sd-exporter
kube-system  5m5s  Normal  Started  pod/metadata-proxy-v0.1-dkm8f  Started container prometheus-to-sd-exporter
kube-system  5m18s  Normal  Scheduled  pod/metadata-proxy-v0.1-ltzzx  Successfully assigned kube-system/metadata-proxy-v0.1-ltzzx to bootstrap-e2e-minion-group-hs9p
kube-system  5m17s  Warning  FailedMount  pod/metadata-proxy-v0.1-ltzzx  MountVolume.SetUp failed for volume "metadata-proxy-token-6hblr" : failed to sync secret cache: timed out waiting for the condition
kube-system  5m15s  Normal  Pulling  pod/metadata-proxy-v0.1-ltzzx  Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m13s  Normal  Pulled  pod/metadata-proxy-v0.1-ltzzx  Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  5m12s  Normal  Created  pod/metadata-proxy-v0.1-ltzzx  Created container metadata-proxy
kube-system  5m11s  Normal  Started  pod/metadata-proxy-v0.1-ltzzx  Started container metadata-proxy
kube-system  5m11s  Normal  Pulling  pod/metadata-proxy-v0.1-ltzzx  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m10s  Normal  Pulled  pod/metadata-proxy-v0.1-ltzzx  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  5m8s  Normal  Created  pod/metadata-proxy-v0.1-ltzzx  Created container prometheus-to-sd-exporter
kube-system  5m6s  Normal  Started  pod/metadata-proxy-v0.1-ltzzx  Started container prometheus-to-sd-exporter
kube-system  5m19s  Normal  SuccessfulCreate  daemonset/metadata-proxy-v0.1  Created pod: metadata-proxy-v0.1-4hnjt
kube-system  5m19s  Normal  SuccessfulCreate  daemonset/metadata-proxy-v0.1  Created pod: metadata-proxy-v0.1-ltzzx
kube-system  5m17s  Normal  SuccessfulCreate  daemonset/metadata-proxy-v0.1  Created pod: metadata-proxy-v0.1-8ll7f
kube-system  5m17s  Normal  SuccessfulCreate  daemonset/metadata-proxy-v0.1  Created pod: metadata-proxy-v0.1-dkm8f
kube-system  5m16s  Normal  SuccessfulCreate  daemonset/metadata-proxy-v0.1  Created pod: metadata-proxy-v0.1-2hrsk
kube-system  4m50s  Normal  Scheduled  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-b9nsp to bootstrap-e2e-minion-group-mp1q
kube-system  4m49s  Normal  Pulling  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  4m48s  Normal  Pulled  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  4m48s  Normal  Created  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Created container metrics-server
kube-system  4m46s  Normal  Started  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Started container metrics-server
kube-system  4m46s  Normal  Pulling  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  4m45s  Normal  Pulled  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  4m45s  Normal  Created  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Created container metrics-server-nanny
kube-system  4m44s  Normal  Started  pod/metrics-server-v0.3.6-5f859c87d6-b9nsp  Started container metrics-server-nanny
kube-system  4m50s  Normal  SuccessfulCreate  replicaset/metrics-server-v0.3.6-5f859c87d6  Created pod: metrics-server-v0.3.6-5f859c87d6-b9nsp
kube-system  5m26s  Warning  FailedScheduling  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  no nodes available to schedule pods
kube-system  5m19s  Warning  FailedScheduling  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
kube-system  5m8s  Warning  FailedScheduling  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  4m59s  Normal  Scheduled  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-2hsf7 to bootstrap-e2e-minion-group-l1kf
kube-system  4m58s  Normal  Pulling  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  4m55s  Normal  Pulled  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  4m55s  Normal  Created  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Created container metrics-server
kube-system  4m54s  Normal  Started  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Started container metrics-server
kube-system  4m54s  Normal  Pulling  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  4m51s  Normal  Pulled  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  4m51s  Normal  Created  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Created container metrics-server-nanny
kube-system  4m51s  Normal  Started  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Started container metrics-server-nanny
kube-system  4m42s  Normal  Killing  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Stopping container metrics-server
kube-system  4m42s  Normal  Killing  pod/metrics-server-v0.3.6-65d4dc878-2hsf7  Stopping container metrics-server-nanny
kube-system  5m27s  Warning  FailedCreate  replicaset/metrics-server-v0.3.6-65d4dc878  Error creating: pods "metrics-server-v0.3.6-65d4dc878-" is forbidden: unable to validate against any pod security policy: []
kube-system  5m26s  Normal  SuccessfulCreate  replicaset/metrics-server-v0.3.6-65d4dc878  Created pod: metrics-server-v0.3.6-65d4dc878-2hsf7
kube-system  4m42s  Normal  SuccessfulDelete  replicaset/metrics-server-v0.3.6-65d4dc878  Deleted pod: metrics-server-v0.3.6-65d4dc878-2hsf7
kube-system  5m28s  Normal  ScalingReplicaSet  deployment/metrics-server-v0.3.6  Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1
kube-system  4m50s  Normal  ScalingReplicaSet  deployment/metrics-server-v0.3.6  Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1
kube-system  4m42s  Normal  ScalingReplicaSet  deployment/metrics-server-v0.3.6  Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0
kube-system  5m8s  Warning  FailedScheduling  pod/volume-snapshot-controller-0  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  5m5s  Normal  Scheduled  pod/volume-snapshot-controller-0  Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-cksd
kube-system  5m4s  Normal  Pulling  pod/volume-snapshot-controller-0  Pulling image "quay.io/k8scsi/snapshot-controller:v2.0.0-rc2"
kube-system  5m  Normal  Pulled  pod/volume-snapshot-controller-0  Successfully pulled image "quay.io/k8scsi/snapshot-controller:v2.0.0-rc2"
kube-system  5m  Normal  Created  pod/volume-snapshot-controller-0  Created container volume-snapshot-controller
kube-system  4m59s  Normal  Started  pod/volume-snapshot-controller-0  Started container volume-snapshot-controller
kube-system  5m16s  Normal  SuccessfulCreate  statefulset/volume-snapshot-controller  create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
kubectl-2531  1s  Normal  Scheduled  pod/deployment4mt9p7dghkt-87fd78899-hnddv  Successfully assigned kubectl-2531/deployment4mt9p7dghkt-87fd78899-hnddv to bootstrap-e2e-minion-group-l1kf
kubectl-2531  2s  Normal  SuccessfulCreate  replicaset/deployment4mt9p7dghkt-87fd78899  Created pod: deployment4mt9p7dghkt-87fd78899-hnddv
kubectl-2531  2s  Normal  ScalingReplicaSet  deployment/deployment4mt9p7dghkt  Scaled up replica set deployment4mt9p7dghkt-87fd78899 to 1
kubectl-2531  4s  Normal  Scheduled  pod/ds6mt9p7dghkt-6nzf6  Successfully assigned kubectl-2531/ds6mt9p7dghkt-6nzf6 to bootstrap-e2e-minion-group-cksd
kubectl-2531  4s  Normal  Scheduled  pod/ds6mt9p7dghkt-g5wgq  Successfully assigned kubectl-2531/ds6mt9p7dghkt-g5wgq to bootstrap-e2e-minion-group-l1kf
kubectl-2531  3s  Normal  Pulling  pod/ds6mt9p7dghkt-g5wgq  Pulling image "fedora:latest"
kubectl-2531  4s  Normal  Scheduled  pod/ds6mt9p7dghkt-jpff6  Successfully assigned kubectl-2531/ds6mt9p7dghkt-jpff6 to bootstrap-e2e-minion-group-mp1q
kubectl-2531  4s  Normal  Scheduled  pod/ds6mt9p7dghkt-rdfmf  Successfully assigned kubectl-2531/ds6mt9p7dghkt-rdfmf to bootstrap-e2e-minion-group-hs9p
kubectl-2531  3s  Warning  FailedMount  pod/ds6mt9p7dghkt-rdfmf  MountVolume.SetUp failed for volume "default-token-nc2gc" : failed to sync secret cache: timed out waiting for the condition
kubectl-2531  1s  Warning  FailedCreatePodSandBox  pod/ds6mt9p7dghkt-rdfmf  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ds6mt9p7dghkt-rdfmf": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:419: writing syncT 'resume' caused \\\"write init-p: broken pipe\\\"\"": unknown
kubectl-2531  4s  Normal  SuccessfulCreate  daemonset/ds6mt9p7dghkt  Created pod: ds6mt9p7dghkt-rdfmf
kubectl-2531  4s  Normal  SuccessfulCreate  daemonset/ds6mt9p7dghkt  Created pod: ds6mt9p7dghkt-6nzf6
kubectl-2531  4s  Normal  SuccessfulCreate  daemonset/ds6mt9p7dghkt  Created pod: ds6mt9p7dghkt-g5wgq
kubectl-2531  4s  Normal  SuccessfulCreate  daemonset/ds6mt9p7dghkt  Created pod: ds6mt9p7dghkt-jpff6
kubectl-2531  <unknown>  Laziness  some data here
kubectl-2531  10s  Normal  ADD  ingress/ingress1mt9p7dghkt  kubectl-2531/ingress1mt9p7dghkt
kubectl-2531  8s  Warning  Translate  ingress/ingress1mt9p7dghkt  error while evaluating the ingress spec: could not find service "kubectl-2531/service"
kubectl-2531  23s  Warning  FailedScheduling  pod/pod1mt9p7dghkt  0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.
kubectl-2531  23s  Warning  FailedScheduling  pod/pod1mt9p7dghkt  skip schedule deleting pod: kubectl-2531/pod1mt9p7dghkt
kubectl-2531  24s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc1mt9p7dghkt  Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
kubectl-2531  19s  Normal  Scheduled  pod/rc1mt9p7dghkt-btpwd  Successfully assigned kubectl-2531/rc1mt9p7dghkt-btpwd to bootstrap-e2e-minion-group-cksd
kubectl-2531  18s  Warning  FailedMount  pod/rc1mt9p7dghkt-btpwd  MountVolume.SetUp failed for volume "default-token-nc2gc" : failed to sync secret cache: timed out waiting for the condition
kubectl-2531  14s  Normal  Pulling  pod/rc1mt9p7dghkt-btpwd  Pulling image "fedora:latest"
kubectl-2531  19s  Normal  SuccessfulCreate  replicationcontroller/rc1mt9p7dghkt  Created pod: rc1mt9p7dghkt-btpwd
kubectl-2531  7s  Normal  Scheduled  pod/rs3mt9p7dghkt-5z4vl  Successfully assigned kubectl-2531/rs3mt9p7dghkt-5z4vl to bootstrap-e2e-minion-group-cksd
kubectl-2531  4s  Warning  FailedCreatePodSandBox  pod/rs3mt9p7dghkt-5z4vl  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "rs3mt9p7dghkt-5z4vl": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-17T13:34:55Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
kubectl-2531  8s  Normal  SuccessfulCreate  replicaset/rs3mt9p7dghkt  Created pod: rs3mt9p7dghkt-5z4vl
kubectl-2531  6s  Warning  FailedCreate  statefulset/ss3mt9p7dghkt  create Pod ss3mt9p7dghkt-0 in StatefulSet ss3mt9p7dghkt failed error: Pod "ss3mt9p7dghkt-0" is invalid: spec.containers: Required value
kubectl-8951  24s  Normal  Scheduled  pod/pause  Successfully assigned kubectl-8951/pause to bootstrap-e2e-minion-group-cksd
kubectl-8951  22s  Normal  Pulled  pod/pause  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubectl-8951  21s  Normal  Created  pod/pause  Created container pause
kubectl-8951  21s  Normal  Started  pod/pause  Started container pause
kubectl-8951  15s  Normal  Killing  pod/pause  Stopping container pause
persistent-local-volumes-test-1202  11s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-cksd-jdq2b  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-1202  11s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-cksd-jdq2b  Created container agnhost
persistent-local-volumes-test-1202  10s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-cksd-jdq2b  Started container agnhost
port-forwarding-6964  42s  Normal  Scheduled  pod/pfpod  Successfully assigned port-forwarding-6964/pfpod to bootstrap-e2e-minion-group-mp1q
port-forwarding-6964  41s  Warning  FailedMount  pod/pfpod  MountVolume.SetUp failed for volume "default-token-vhmgg" : failed to sync secret cache: timed out waiting for the condition
port-forwarding-6964  39s  Normal  Pulled  pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-6964  39s  Normal  Created  pod/pfpod  Created container readiness
port-forwarding-6964  39s  Normal  Started  pod/pfpod  Started container readiness
port-forwarding-6964  39s  Normal  Pulled  pod/pfpod  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-6964  39s  Normal  Created  pod/pfpod  Created container portforwardtester
port-forwarding-6964  39s  Normal  Started  pod/pfpod  Started container portforwardtester
port-forwarding-6964  8s  Warning  Unhealthy  pod/pfpod  Readiness probe failed:
projected-24  44s  Normal  Scheduled  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Successfully assigned projected-24/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf to bootstrap-e2e-minion-group-hs9p
projected-24  40s  Normal  Pulled  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-24  39s  Normal  Created  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Created container client-container
projected-24  38s  Normal  Started  pod/labelsupdatea40418f8-51bb-4c67-83ed-6facca8fefcf  Started container client-container
projected-2680  21s  Normal  Scheduled  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  Successfully assigned projected-2680/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd to bootstrap-e2e-minion-group-hs9p
projected-2680  20s  Warning  FailedMount  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  MountVolume.SetUp failed for volume "projected-secret-volume" : failed to sync secret cache: timed out waiting for the condition
projected-2680  20s  Warning  FailedMount  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  MountVolume.SetUp failed for volume "default-token-67dn2" : failed to sync secret cache: timed out waiting for the condition
projected-2680  18s  Normal  Pulled  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-2680  18s  Normal  Created  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  Created container projected-secret-volume-test
projected-2680  17s  Normal  Started  pod/pod-projected-secrets-124d1cd1-6401-4a08-a7fb-31d27d48a2cd  Started container projected-secret-volume-test
provisioning-1561  82s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-1561  82s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Created container agnhost
provisioning-1561  81s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Started container agnhost
provisioning-1561  31s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-cksd-l4ph7  Stopping container agnhost
provisioning-1561  60s  Warning  FailedMount  pod/pod-subpath-test-preprovisionedpv-bzfc  Unable to attach or mount volumes: unmounted volumes=[test-volume liveness-probe-volume default-token-d6mdx], unattached volumes=[test-volume liveness-probe-volume default-token-d6mdx]: error processing PVC provisioning-1561/pvc-5h5q7: failed to fetch PVC from API server: persistentvolumeclaims "pvc-5h5q7" is forbidden: User "system:node:bootstrap-e2e-minion-group-cksd" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "provisioning-1561": no relationship found between node "bootstrap-e2e-minion-group-cksd" and this object
provisioning-1561  46s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-1561  45s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container init-volume-preprovisionedpv-bzfc
provisioning-1561  45s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container init-volume-preprovisionedpv-bzfc
provisioning-1561  44s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1561  44s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container test-init-volume-preprovisionedpv-bzfc
provisioning-1561  43s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container test-init-volume-preprovisionedpv-bzfc
provisioning-1561  43s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-bzfc  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-1561  42s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-bzfc  Created container test-container-subpath-preprovisionedpv-bzfc
provisioning-1561  42s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-bzfc  Started container test-container-subpath-preprovisionedpv-bzfc
provisioning-1561  70s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-5h5q7  storageclass.storage.k8s.io "provisioning-1561" not found
provisioning-2688  4s  Normal  LeaderElection  endpoints/example.com-nfs-provisioning-2688  external-provisioner-cn765_325f5b19-6bff-41db-8603-66811253ce90 became leader
provisioning-2688  56s  Normal  Scheduled  pod/external-provisioner-cn765  Successfully assigned provisioning-2688/external-provisioner-cn765 to bootstrap-e2e-minion-group-hs9p
provisioning-2688  53s  Normal  Pulling  pod/external-provisioner-cn765  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-2688  11s  Normal  Pulled  pod/external-provisioner-cn765  Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-2688  11s  Normal  Created  pod/external-provisioner-cn765  Created container nfs-provisioner
provisioning-2688  10s  Normal  Started  pod/external-provisioner-cn765  Started container nfs-provisioner
provisioning-2872  25s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-hp2b  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2872  24s  Normal  Created  pod/pod-subpath-test-inlinevolume-hp2b  Created container init-volume-inlinevolume-hp2b
provisioning-2872  23s  Normal  Started  pod/pod-subpath-test-inlinevolume-hp2b  Started container init-volume-inlinevolume-hp2b
provisioning-2872  22s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-hp2b  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2872  21s  Normal  Created  pod/pod-subpath-test-inlinevolume-hp2b  Created container test-init-subpath-inlinevolume-hp2b
provisioning-2872  21s  Normal  Started  pod/pod-subpath-test-inlinevolume-hp2b  Started container test-init-subpath-inlinevolume-hp2b
provisioning-2872  21s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-hp2b  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2872  21s  Normal  Created  pod/pod-subpath-test-inlinevolume-hp2b  Created container test-container-subpath-inlinevolume-hp2b
provisioning-2872  20s  Normal  Started  pod/pod-subpath-test-inlinevolume-hp2b  Started container test-container-subpath-inlinevolume-hp2b
provisioning-2872  20s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-hp2b  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2872  20s  Normal  Created  pod/pod-subpath-test-inlinevolume-hp2b  Created container test-container-volume-inlinevolume-hp2b
provisioning-2872  20s  Normal  Started  pod/pod-subpath-test-inlinevolume-hp2b  Started container test-container-volume-inlinevolume-hp2b
provisioning-3858  12s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-l1kf-2gbrh  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-3858  12s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-l1kf-2gbrh  Created container agnhost
provisioning-3858  11s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-l1kf-2gbrh  Started container agnhost
provisioning-4445  22s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-cksd-4tt2w  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-4445  22s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-cksd-4tt2w  Created container agnhost
provisioning-4445  21s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-cksd-4tt2w  Started container agnhost
provisioning-4445  15s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-smzzh  storageclass.storage.k8s.io "provisioning-4445" not found
provisioning-4887  58s  Warning  FailedMount  pod/hostpath-symlink-prep-provisioning-4887  MountVolume.SetUp failed for volume "default-token-6hlk7" : failed to sync secret cache: timed out waiting for the condition
provisioning-4887  55s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-4887  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  55s  Normal  Created  pod/hostpath-symlink-prep-provisioning-4887  Created container init-volume-provisioning-4887
provisioning-4887  53s  Normal  Started  pod/hostpath-symlink-prep-provisioning-4887  Started container init-volume-provisioning-4887
provisioning-4887  29s  Warning  FailedMount  pod/hostpath-symlink-prep-provisioning-4887  MountVolume.SetUp failed for volume "default-token-6hlk7" : failed to sync secret cache: timed out waiting for the condition
provisioning-4887  27s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-4887  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  26s  Normal  Created  pod/hostpath-symlink-prep-provisioning-4887  Created container init-volume-provisioning-4887
provisioning-4887  26s  Normal  Started  pod/hostpath-symlink-prep-provisioning-4887  Started container init-volume-provisioning-4887
provisioning-4887  46s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4887  46s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container init-volume-inlinevolume-drt4
provisioning-4887  45s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container init-volume-inlinevolume-drt4
provisioning-4887  44s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4887  44s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container test-init-volume-inlinevolume-drt4
provisioning-4887  41s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container test-init-volume-inlinevolume-drt4
provisioning-4887  38s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-drt4  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4887  38s  Normal  Created  pod/pod-subpath-test-inlinevolume-drt4  Created container test-container-subpath-inlinevolume-drt4
provisioning-4887  37s  Normal  Started  pod/pod-subpath-test-inlinevolume-drt4  Started container test-container-subpath-inlinevolume-drt4
provisioning-6230  50s  Normal  LeaderElection  endpoints/example.com-nfs-provisioning-6230  external-provisioner-gmfzp_9fc2b5be-a821-46bc-9039-2d9a03a0e5d8 became leader
provisioning-6230  74s  Normal  Scheduled  pod/external-provisioner-gmfzp  Successfully assigned provisioning-6230/external-provisioner-gmfzp to bootstrap-e2e-minion-group-mp1q
provisioning-6230  73s  Warning  FailedMount  pod/external-provisioner-gmfzp  MountVolume.SetUp failed for volume "default-token-nkkmc" : failed to sync secret cache: timed out waiting for the condition
provisioning-6230  72s  Normal  Pulling  pod/external-provisioner-gmfzp  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-6230  57s  Normal  Pulled  pod/external-provisioner-gmfzp  Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-6230  56s  Normal  Created  pod/external-provisioner-gmfzp  Created container nfs-provisioner
provisioning-6230  56s  Normal  Started  pod/external-provisioner-gmfzp  Started container nfs-provisioner
provisioning-6230  13s  Normal  Killing  pod/external-provisioner-gmfzp  Stopping container nfs-provisioner
provisioning-6230  50s  Normal  Provisioning  persistentvolumeclaim/nfsnwtgh  External provisioner is provisioning volume for claim "provisioning-6230/nfsnwtgh"
provisioning-6230  50s  Normal  ExternalProvisioning  persistentvolumeclaim/nfsnwtgh  waiting for a volume to be created, either by external provisioner "example.com/nfs-provisioning-6230" or manually created by system administrator
provisioning-6230  49s  Normal  ProvisioningSucceeded  persistentvolumeclaim/nfsnwtgh  Successfully provisioned volume pvc-0f2b209f-0879-46de-8188-068aaf8bdd4d
provisioning-6230  47s  Normal  Scheduled  pod/pod-subpath-test-dynamicpv-n6w9  Successfully assigned provisioning-6230/pod-subpath-test-dynamicpv-n6w9 to bootstrap-e2e-minion-group-hs9p
provisioning-6230  41s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6230  41s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container init-volume-dynamicpv-n6w9
provisioning-6230  40s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container init-volume-dynamicpv-n6w9
provisioning-6230  40s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  39s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-init-subpath-dynamicpv-n6w9
provisioning-6230  38s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-init-subpath-dynamicpv-n6w9
provisioning-6230  37s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  36s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-container-subpath-dynamicpv-n6w9
provisioning-6230  35s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-container-subpath-dynamicpv-n6w9
provisioning-6230  30s  Normal  Scheduled  pod/pod-subpath-test-dynamicpv-n6w9  Successfully assigned provisioning-6230/pod-subpath-test-dynamicpv-n6w9 to bootstrap-e2e-minion-group-cksd
provisioning-6230  24s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-n6w9  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6230  24s  Normal  Created  pod/pod-subpath-test-dynamicpv-n6w9  Created container test-container-subpath-dynamicpv-n6w9
provisioning-6230  23s  Normal  Started  pod/pod-subpath-test-dynamicpv-n6w9  Started container test-container-subpath-dynamicpv-n6w9
provisioning-8481  70s  Normal  Scheduled  pod/pod-subpath-test-inlinevolume-nzw8  Successfully assigned provisioning-8481/pod-subpath-test-inlinevolume-nzw8 to bootstrap-e2e-minion-group-hs9p
provisioning-8481  68s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-nzw8  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-8481  68s  Normal  Created  pod/pod-subpath-test-inlinevolume-nzw8  Created container init-volume-inlinevolume-nzw8
provisioning-8481  67s  Normal  Started  pod/pod-subpath-test-inlinevolume-nzw8  Started container init-volume-inlinevolume-nzw8
provisioning-8481  66s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-nzw8  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-8481  65s  Normal  Created  pod/pod-subpath-test-inlinevolume-nzw8  Created container test-container-subpath-inlinevolume-nzw8
provisioning-8481  65s  Normal  Started  pod/pod-subpath-test-inlinevolume-nzw8  Started container test-container-subpath-inlinevolume-nzw8
provisioning-8537  8s  Normal  LeaderElection  endpoints/example.com-nfs-provisioning-8537  external-provisioner-5jdkq_d0667193-244b-4379-a240-4b4b7da83bda became leader
provisioning-8537  59s  Normal  Scheduled  pod/external-provisioner-5jdkq  Successfully assigned provisioning-8537/external-provisioner-5jdkq to bootstrap-e2e-minion-group-l1kf
provisioning-8537  56s  Normal  Pulling  pod/external-provisioner-5jdkq  Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-8537  19s  Normal  Pulled  pod/external-provisioner-5jdkq  Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-8537  18s  Normal  Created  pod/external-provisioner-5jdkq  Created container nfs-provisioner
provisioning-8537  17s  Normal  Started  pod/external-provisioner-5jdkq  Started container nfs-provisioner
provisioning-8537  5s  Normal  ExternalProvisioning  persistentvolumeclaim/nfsxh69f  waiting for a volume to be created, either by external provisioner "example.com/nfs-provisioning-8537" or manually created by system administrator
provisioning-8537  6s  Normal  Provisioning  persistentvolumeclaim/nfsxh69f  External provisioner is provisioning volume for claim "provisioning-8537/nfsxh69f"
provisioning-8537  5s  Normal  ProvisioningSucceeded  persistentvolumeclaim/nfsxh69f  Successfully provisioned volume pvc-ebf811a6-506e-4dfc-a869-ab893049e2eb
provisioning-8537  1s  Normal  Scheduled  pod/pod-subpath-test-dynamicpv-wn5q  Successfully assigned provisioning-8537/pod-subpath-test-dynamicpv-wn5q to bootstrap-e2e-minion-group-hs9p
provisioning-8819  5s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-8819  4s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-8819  1s  Warning  FailedMount  pod/csi-hostpath-resizer-0  MountVolume.SetUp failed for volume "csi-resizer-token-xmpzk" : failed to sync secret cache: timed out waiting for the condition
provisioning-8819  5s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-9355  49s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-9355  49s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Created container agnhost
provisioning-9355  48s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Started container agnhost
provisioning-9355  13s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-mp1q-zd6q2  Stopping container agnhost
provisioning-9355  25s  Normal  Pulling  pod/pod-subpath-test-preprovisionedpv-zlzp  Pulling image "docker.io/library/busybox:1.29"
provisioning-9355  24s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zlzp  Successfully pulled image "docker.io/library/busybox:1.29"
provisioning-9355  24s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zlzp  Created container init-volume-preprovisionedpv-zlzp
provisioning-9355  23s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zlzp  Started container init-volume-preprovisionedpv-zlzp
provisioning-9355  22s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zlzp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9355  22s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zlzp  Created container test-init-subpath-preprovisionedpv-zlzp
provisioning-9355  22s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zlzp  Started container test-init-subpath-preprovisionedpv-zlzp
provisioning-9355  21s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zlzp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9355  21s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zlzp  Created container test-container-subpath-preprovisionedpv-zlzp
provisioning-9355  21s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zlzp  Started container test-container-subpath-preprovisionedpv-zlzp
provisioning-9355  21s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-zlzp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-9355  20s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-zlzp  Created container test-container-volume-preprovisionedpv-zlzp
provisioning-9355  20s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-zlzp  Started container test-container-volume-preprovisionedpv-zlzp
provisioning-9355  44s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-gxg74  storageclass.storage.k8s.io "provisioning-9355" not found
pv-2914  68s  Normal  Scheduled  pod/nfs-server  Successfully assigned pv-2914/nfs-server to bootstrap-e2e-minion-group-hs9p
pv-2914  66s  Normal  Pulling  pod/nfs-server  Pulling image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
pv-2914  34s  Normal  Pulled  pod/nfs-server  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
pv-2914  34s  Normal  Created  pod/nfs-server  Created container nfs-server
pv-2914  34s  Normal  Started  pod/nfs-server  Started container nfs-server
pv-2914  13s  Normal  Killing  pod/nfs-server  Stopping container nfs-server
pv-2914  24s  Normal  Scheduled  pod/pvc-tester-csqd9  Successfully assigned pv-2914/pvc-tester-csqd9 to bootstrap-e2e-minion-group-hs9p
pv-2914  21s  Normal  Pulled  pod/pvc-tester-csqd9  Container image "docker.io/library/busybox:1.29" already present on machine
pv-2914  21s  Normal  Created  pod/pvc-tester-csqd9  Created container write-pod
pv-2914  20s  Normal  Started  pod/pvc-tester-csqd9  Started container write-pod
pv-2914  17s  Normal  SandboxChanged  pod/pvc-tester-csqd9  Pod sandbox changed, it will be killed and re-created.
pv-2914  14s  Warning  FailedCreatePodSandBox  pod/pvc-tester-csqd9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "pvc-tester-csqd9": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"read init-p: connection reset by peer\"": unknown
security-context-7437  25s  Normal  Scheduled  pod/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3  Successfully assigned security-context-7437/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3 to bootstrap-e2e-minion-group-l1kf
security-context-7437  20s  Normal  Pulled  pod/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3  Container image "docker.io/library/busybox:1.29" already present on machine
security-context-7437  20s  Normal  Created  pod/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3  Created container test-container
security-context-7437  19s  Normal  Started  pod/security-context-bcbfffc0-d04f-4c25-b238-368ee6ebdda3  Started container test-container
security-context-test-6061  24s  Normal  Scheduled  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Successfully assigned security-context-test-6061/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f to bootstrap-e2e-minion-group-hs9p
security-context-test-6061  22s  Normal  Pulling  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Pulling image "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"
security-context-test-6061  19s  Normal  Pulled  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"
security-context-test-6061  18s  Normal  Created  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Created container alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f
security-context-test-6061  17s  Normal  Started  pod/alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f  Started container alpine-nnp-nil-bd64228b-dfd2-4038-9364-7a42f2202c4f
services-135  35s  Normal  Scheduled  pod/hostexec  Successfully assigned services-135/hostexec to bootstrap-e2e-minion-group-mp1q
services-135  34s  Normal  Pulled  pod/hostexec  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-135  34s  Normal  Created  pod/hostexec  Created container agnhost
services-135  33s  Normal  Started  pod/hostexec  Started container agnhost
services-5413  47s  Normal  Scheduled  pod/execpod24s8x  Successfully assigned services-5413/execpod24s8x to bootstrap-e2e-minion-group-hs9p
services-5413  44s  Normal  Pulled  pod/execpod24s8x  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  44s  Normal  Created  pod/execpod24s8x  Created container agnhost-pause
services-5413  42s  Normal  Started  pod/execpod24s8x  Started container agnhost-pause
services-5413  55s  Normal  Scheduled  pod/externalname-service-5f6kw  Successfully assigned services-5413/externalname-service-5f6kw to bootstrap-e2e-minion-group-cksd
services-5413  53s  Normal  Pulled  pod/externalname-service-5f6kw  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  53s  Normal  Created  pod/externalname-service-5f6kw  Created container externalname-service
services-5413  52s  Normal  Started  pod/externalname-service-5f6kw  Started container externalname-service
services-5413  56s  Normal  Scheduled  pod/externalname-service-zpns9  Successfully assigned services-5413/externalname-service-zpns9 to bootstrap-e2e-minion-group-l1kf
services-5413  52s  Normal  Pulled  pod/externalname-service-zpns9  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-5413  52s  Normal  Created  pod/externalname-service-zpns9  Created container externalname-service
services-5413  51s  Normal  Started  pod/externalname-service-zpns9  Started container externalname-service
services-5413  56s  Normal  SuccessfulCreate  replicationcontroller/externalname-service  Created pod: externalname-service-zpns9
services-5413  56s  Normal  SuccessfulCreate  replicationcontroller/externalname-service  Created pod: externalname-service-5f6kw
services-6640  5s  Normal  Scheduled  pod/service-headless-9jv9h  Successfully assigned services-6640/service-headless-9jv9h to bootstrap-e2e-minion-group-l1kf
services-6640  3s  Normal  Pulled  pod/service-headless-9jv9h  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-6640  3s  Normal  Created  pod/service-headless-9jv9h  Created container service-headless
services-6640  3s  Normal  Started  pod/service-headless-9jv9h  Started container service-headless
services-6640  5s  Normal  Scheduled  pod/service-headless-tcrtp  Successfully assigned services-6640/service-headless-tcrtp to
bootstrap-e2e-minion-group-mp1q\nservices-6640                        3s          Normal    Pulled                    pod/service-headless-tcrtp                                       Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-6640                        3s          Normal    Created                   pod/service-headless-tcrtp                                       Created container service-headless\nservices-6640                        5s          Normal    Scheduled                 pod/service-headless-z9tc7                                       Successfully assigned services-6640/service-headless-z9tc7 to bootstrap-e2e-minion-group-hs9p\nservices-6640                        1s          Normal    Pulled                    pod/service-headless-z9tc7                                       Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-6640                        1s          Normal    Created                   pod/service-headless-z9tc7                                       Created container service-headless\nservices-6640                        5s          Normal    SuccessfulCreate          replicationcontroller/service-headless                           Created pod: service-headless-tcrtp\nservices-6640                        5s          Normal    SuccessfulCreate          replicationcontroller/service-headless                           Created pod: service-headless-z9tc7\nservices-6640                        5s          Normal    SuccessfulCreate          replicationcontroller/service-headless                           Created pod: service-headless-9jv9h\nstatefulset-1548                     118s        Normal    Scheduled                 pod/ss2-0                                                        Successfully assigned statefulset-1548/ss2-0 to bootstrap-e2e-minion-group-l1kf\nstatefulset-1548                     117s        Warning   FailedMount               pod/ss2-0                                                        MountVolume.SetUp failed for volume \"default-token-sslzp\" : failed to sync secret cache: timed out waiting for the condition\nstatefulset-1548                     114s        Normal    Pulled                    pod/ss2-0                                                        Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-1548                     114s        Normal    Created                   pod/ss2-0                                                        Created container webserver\nstatefulset-1548                     114s        Normal    Started                   pod/ss2-0                                                        Started container webserver\nstatefulset-1548                     69s         Normal    Killing                   pod/ss2-0                                                        Stopping container webserver\nstatefulset-1548                     69s         Normal    Scheduled                 pod/ss2-0                                                        Successfully assigned statefulset-1548/ss2-0 to bootstrap-e2e-minion-group-l1kf\nstatefulset-1548                     66s         Normal    Pulled                    pod/ss2-0                                                        Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-1548                     66s         Normal    Created                   pod/ss2-0            
                                            Created container webserver\nstatefulset-1548                     65s         Normal    Started                   pod/ss2-0                                                        Started container webserver\nstatefulset-1548                     14s         Normal    Killing                   pod/ss2-0                                                        Stopping container webserver\nstatefulset-1548                     15s         Warning   Unhealthy                 pod/ss2-0                                                        Readiness probe failed: Get http://10.64.3.23:80/index.html: read tcp 10.64.3.1:57994->10.64.3.23:80: read: connection reset by peer\nstatefulset-1548                     15s         Normal    Scheduled                 pod/ss2-0                                                        Successfully assigned statefulset-1548/ss2-0 to bootstrap-e2e-minion-group-hs9p\nstatefulset-1548                     10s         Normal    Pulling                   pod/ss2-0                                                        Pulling image \"docker.io/library/httpd:2.4.39-alpine\"\nstatefulset-1548                     111s        Normal    Scheduled                 pod/ss2-1                                                        Successfully assigned statefulset-1548/ss2-1 to bootstrap-e2e-minion-group-hs9p\nstatefulset-1548                     109s        Normal    Pulling                   pod/ss2-1                                                        Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1548                     98s         Normal    Pulled                    pod/ss2-1                                                        Successfully pulled image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1548                     95s         Normal    Created                   pod/ss2-1                                                        Created container webserver\nstatefulset-1548                     95s         Normal    Started                   pod/ss2-1                                                        Started container webserver\nstatefulset-1548                     68s         Normal    Killing                   pod/ss2-1                                                        Stopping container webserver\nstatefulset-1548                     63s         Normal    Scheduled                 pod/ss2-1                                                        Successfully assigned statefulset-1548/ss2-1 to bootstrap-e2e-minion-group-hs9p\nstatefulset-1548                     59s         Normal    Pulled                    pod/ss2-1                                                        Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-1548                     59s         Normal    Created                   pod/ss2-1                                                        Created container webserver\nstatefulset-1548                     58s         Normal    Started                   pod/ss2-1                                                        Started container webserver\nstatefulset-1548                     15s         Normal    Killing                   pod/ss2-1                                                        Stopping container webserver\nstatefulset-1548                     87s         Normal    Scheduled                 pod/ss2-2                                                        Successfully assigned statefulset-1548/ss2-2 to 
bootstrap-e2e-minion-group-hs9p\nstatefulset-1548                     84s         Normal    Pulled                    pod/ss2-2                                                        Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-1548                     84s         Normal    Created                   pod/ss2-2                                                        Created container webserver\nstatefulset-1548                     83s         Normal    Started                   pod/ss2-2                                                        Started container webserver\nstatefulset-1548                     68s         Normal    Killing                   pod/ss2-2                                                        Stopping container webserver\nstatefulset-1548                     52s         Normal    Scheduled                 pod/ss2-2                                                        Successfully assigned statefulset-1548/ss2-2 to bootstrap-e2e-minion-group-cksd\nstatefulset-1548                     50s         Normal    Pulling                   pod/ss2-2                                                        Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1548                     40s         Normal    Pulled                    pod/ss2-2                                                        Successfully pulled image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1548                     40s         Normal    Created                   pod/ss2-2                                                        Created container webserver\nstatefulset-1548                     40s         Normal    Started                   pod/ss2-2                                                        Started container webserver\nstatefulset-1548                     15s         Normal    Killing                   pod/ss2-2                                                        Stopping container webserver\nstatefulset-1548                     15s         Normal    SuccessfulCreate          statefulset/ss2                                                  create Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-1548                     63s         Normal    SuccessfulCreate          statefulset/ss2                                                  create Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-1548                     52s         Normal    SuccessfulCreate          statefulset/ss2                                                  create Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-1548                     14s         Warning   FailedToUpdateEndpoint    endpoints/test                                                   Failed to update endpoint statefulset-1548/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again\nvar-expansion-4925                   9s          Normal    Scheduled                 pod/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a           Successfully assigned var-expansion-4925/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a to bootstrap-e2e-minion-group-cksd\nvar-expansion-4925                   8s          Warning   FailedMount               pod/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a           MountVolume.SetUp failed for volume \"default-token-rqvdh\" : failed to sync secret cache: timed out waiting for the condition\nvar-expansion-4925                   2s          Normal   
 Pulled                    pod/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a           Container image \"docker.io/library/busybox:1.29\" already present on machine\nvar-expansion-4925                   2s          Normal    Created                   pod/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a           Created container dapi-container\nvar-expansion-4925                   1s          Normal    Started                   pod/var-expansion-a6c1baff-0b50-47da-a2db-a08cd08d738a           Started container dapi-container\nvolume-2853                          30s         Normal    Scheduled                 pod/exec-volume-test-preprovisionedpv-rg5z                       Successfully assigned volume-2853/exec-volume-test-preprovisionedpv-rg5z to bootstrap-e2e-minion-group-cksd\nvolume-2853                          30s         Warning   FailedMount               pod/exec-volume-test-preprovisionedpv-rg5z                       Unable to attach or mount volumes: unmounted volumes=[default-token-72rwq vol1], unattached volumes=[default-token-72rwq vol1]: error processing PVC volume-2853/pvc-x2vpz: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-x2vpz\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-cksd\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"volume-2853\": no relationship found between node \"bootstrap-e2e-minion-group-cksd\" and this object\nvolume-2853                          25s         Normal    SuccessfulAttachVolume    pod/exec-volume-test-preprovisionedpv-rg5z                       AttachVolume.Attach succeeded for volume \"gcepd-gx77m\"\nvolume-2853                          14s         Normal    Pulled                    pod/exec-volume-test-preprovisionedpv-rg5z                       Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-2853                          14s         Normal    Created                   pod/exec-volume-test-preprovisionedpv-rg5z                       Created container exec-container-preprovisionedpv-rg5z\nvolume-2853                          13s         Normal    Started                   pod/exec-volume-test-preprovisionedpv-rg5z                       Started container exec-container-preprovisionedpv-rg5z\nvolume-2853                          48s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-x2vpz                                  storageclass.storage.k8s.io \"volume-2853\" not found\nvolume-3746                          60s         Normal    Scheduled                 pod/gcepd-client                                                 Successfully assigned volume-3746/gcepd-client to bootstrap-e2e-minion-group-l1kf\nvolume-3746                          54s         Normal    SuccessfulAttachVolume    pod/gcepd-client                                                 AttachVolume.Attach succeeded for volume \"gcepd-volume-0\"\nvolume-3746                          39s         Normal    Pulled                    pod/gcepd-client                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-3746                          39s         Normal    Created                   pod/gcepd-client                                                 Created container gcepd-client\nvolume-3746                          38s         Normal    Started                   pod/gcepd-client                                                 Started container 
gcepd-client\nvolume-3746                          26s         Normal    Killing                   pod/gcepd-client                                                 Stopping container gcepd-client\nvolume-3746                          108s        Normal    Scheduled                 pod/gcepd-injector                                               Successfully assigned volume-3746/gcepd-injector to bootstrap-e2e-minion-group-hs9p\nvolume-3746                          100s        Normal    SuccessfulAttachVolume    pod/gcepd-injector                                               AttachVolume.Attach succeeded for volume \"gcepd-volume-0\"\nvolume-3746                          92s         Normal    Pulled                    pod/gcepd-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-3746                          91s         Normal    Created                   pod/gcepd-injector                                               Created container gcepd-injector\nvolume-3746                          90s         Normal    Started                   pod/gcepd-injector                                               Started container gcepd-injector\nvolume-3746                          73s         Normal    Killing                   pod/gcepd-injector                                               Stopping container gcepd-injector\nvolume-6261                          43s         Normal    Scheduled                 pod/gluster-client                                               Successfully assigned volume-6261/gluster-client to bootstrap-e2e-minion-group-l1kf\nvolume-6261                          38s         Normal    Pulled                    pod/gluster-client                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6261                          38s         Normal    Created                   pod/gluster-client                                               Created container gluster-client\nvolume-6261                          36s         Normal    Started                   pod/gluster-client                                               Started container gluster-client\nvolume-6261                          27s         Normal    Killing                   pod/gluster-client                                               Stopping container gluster-client\nvolume-6261                          75s         Normal    Scheduled                 pod/gluster-injector                                             Successfully assigned volume-6261/gluster-injector to bootstrap-e2e-minion-group-hs9p\nvolume-6261                          71s         Normal    Pulled                    pod/gluster-injector                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6261                          71s         Normal    Created                   pod/gluster-injector                                             Created container gluster-injector\nvolume-6261                          70s         Normal    Started                   pod/gluster-injector                                             Started container gluster-injector\nvolume-6261                          56s         Normal    Killing                   pod/gluster-injector                                             Stopping container gluster-injector\nvolume-6261                          110s        Normal    Scheduled                 
pod/gluster-server                                               Successfully assigned volume-6261/gluster-server to bootstrap-e2e-minion-group-l1kf\nvolume-6261                          107s        Normal    Pulling                   pod/gluster-server                                               Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolume-6261                          90s         Normal    Pulled                    pod/gluster-server                                               Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolume-6261                          90s         Normal    Created                   pod/gluster-server                                               Created container gluster-server\nvolume-6261                          90s         Normal    Started                   pod/gluster-server                                               Started container gluster-server\nvolume-6261                          14s         Normal    Killing                   pod/gluster-server                                               Stopping container gluster-server\nvolume-6261                          86s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-5clft                                  storageclass.storage.k8s.io \"volume-6261\" not found\nvolume-8901                          45s         Normal    Scheduled                 pod/exec-volume-test-preprovisionedpv-8jlb                       Successfully assigned volume-8901/exec-volume-test-preprovisionedpv-8jlb to bootstrap-e2e-minion-group-cksd\nvolume-8901                          40s         Normal    SuccessfulAttachVolume    pod/exec-volume-test-preprovisionedpv-8jlb                       AttachVolume.Attach succeeded for volume \"gcepd-52jd5\"\nvolume-8901                          33s         Normal    Pulling                   pod/exec-volume-test-preprovisionedpv-8jlb                       Pulling image \"docker.io/library/nginx:1.14-alpine\"\nvolume-8901                          31s         Normal    Pulled                    pod/exec-volume-test-preprovisionedpv-8jlb                       Successfully pulled image \"docker.io/library/nginx:1.14-alpine\"\nvolume-8901                          31s         Normal    Created                   pod/exec-volume-test-preprovisionedpv-8jlb                       Created container exec-container-preprovisionedpv-8jlb\nvolume-8901                          30s         Normal    Started                   pod/exec-volume-test-preprovisionedpv-8jlb                       Started container exec-container-preprovisionedpv-8jlb\nvolume-8901                          61s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-lsmb2                                  storageclass.storage.k8s.io \"volume-8901\" not found\nvolume-expand-7397                   19s         Normal    Pulled                    pod/csi-hostpath-attacher-0                                      Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nvolume-expand-7397                   18s         Normal    Created                   pod/csi-hostpath-attacher-0                                      Created container csi-attacher\nvolume-expand-7397                   17s         Normal    Started                   pod/csi-hostpath-attacher-0                                      Started container csi-attacher\nvolume-expand-7397                   24s         Warning   FailedCreate              
statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7397                   24s         Normal    SuccessfulCreate          statefulset/csi-hostpath-attacher                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nvolume-expand-7397                   19s         Normal    Pulled                    pod/csi-hostpath-provisioner-0                                   Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nvolume-expand-7397                   19s         Normal    Created                   pod/csi-hostpath-provisioner-0                                   Created container csi-provisioner\nvolume-expand-7397                   17s         Normal    Started                   pod/csi-hostpath-provisioner-0                                   Started container csi-provisioner\nvolume-expand-7397                   24s         Warning   FailedCreate              statefulset/csi-hostpath-provisioner                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7397                   24s         Normal    SuccessfulCreate          statefulset/csi-hostpath-provisioner                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nvolume-expand-7397                   19s         Normal    Pulling                   pod/csi-hostpath-resizer-0                                       Pulling image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nvolume-expand-7397                   9s          Normal    Pulled                    pod/csi-hostpath-resizer-0                                       Successfully pulled image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nvolume-expand-7397                   8s          Normal    Created                   pod/csi-hostpath-resizer-0                                       Created container csi-resizer\nvolume-expand-7397                   7s          Normal    Started                   pod/csi-hostpath-resizer-0                                       Started container csi-resizer\nvolume-expand-7397                   24s         Warning   FailedCreate              statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7397                   24s         Normal    SuccessfulCreate          statefulset/csi-hostpath-resizer                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nvolume-expand-7397                   19s         Normal    ExternalProvisioning      persistentvolumeclaim/csi-hostpath7j47j                          waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-7397\" or manually created by system administrator\nvolume-expand-7397                   6s          Normal    Provisioning              persistentvolumeclaim/csi-hostpath7j47j                          External provisioner is provisioning volume for claim 
\"volume-expand-7397/csi-hostpath7j47j\"\nvolume-expand-7397                   6s          Normal    ProvisioningSucceeded     persistentvolumeclaim/csi-hostpath7j47j                          Successfully provisioned volume pvc-2739630e-b6fe-4a99-ba27-fd141f08b6ba\nvolume-expand-7397                   20s         Normal    Pulled                    pod/csi-hostpathplugin-0                                         Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nvolume-expand-7397                   20s         Normal    Created                   pod/csi-hostpathplugin-0                                         Created container node-driver-registrar\nvolume-expand-7397                   19s         Normal    Started                   pod/csi-hostpathplugin-0                                         Started container node-driver-registrar\nvolume-expand-7397                   19s         Normal    Pulling                   pod/csi-hostpathplugin-0                                         Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nvolume-expand-7397                   7s          Normal    Pulled                    pod/csi-hostpathplugin-0                                         Successfully pulled image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nvolume-expand-7397                   7s          Normal    Created                   pod/csi-hostpathplugin-0                                         Created container hostpath\nvolume-expand-7397                   7s          Normal    Started                   pod/csi-hostpathplugin-0                                         Started container hostpath\nvolume-expand-7397                   7s          Normal    Pulling                   pod/csi-hostpathplugin-0                                         Pulling image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nvolume-expand-7397                   4s          Normal    Pulled                    pod/csi-hostpathplugin-0                                         Successfully pulled image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nvolume-expand-7397                   4s          Normal    Created                   pod/csi-hostpathplugin-0                                         Created container liveness-probe\nvolume-expand-7397                   3s          Normal    Started                   pod/csi-hostpathplugin-0                                         Started container liveness-probe\nvolume-expand-7397                   25s         Normal    SuccessfulCreate          statefulset/csi-hostpathplugin                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nvolume-expand-7397                   19s         Normal    Pulling                   pod/csi-snapshotter-0                                            Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nvolume-expand-7397                   8s          Normal    Pulled                    pod/csi-snapshotter-0                                            Successfully pulled image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nvolume-expand-7397                   8s          Normal    Created                   pod/csi-snapshotter-0                                            Created container csi-snapshotter\nvolume-expand-7397                   8s          Normal    Started                   pod/csi-snapshotter-0                                            Started container csi-snapshotter\nvolume-expand-7397                   24s         Normal    
SuccessfulCreate          statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nvolume-expand-7397                   1s          Normal    SuccessfulAttachVolume    pod/security-context-63583541-d77b-4c66-8c99-cbfe3b028ce8        AttachVolume.Attach succeeded for volume \"pvc-2739630e-b6fe-4a99-ba27-fd141f08b6ba\"\nvolumemode-5109                      8s          Normal    LeaderElection            endpoints/example.com-nfs-volumemode-5109                        external-provisioner-m5hmh_e26cc683-dc14-4ae3-b5df-27778112aa5c became leader\nvolumemode-5109                      49s         Normal    Scheduled                 pod/external-provisioner-m5hmh                                   Successfully assigned volumemode-5109/external-provisioner-m5hmh to bootstrap-e2e-minion-group-l1kf\nvolumemode-5109                      45s         Normal    Pulling                   pod/external-provisioner-m5hmh                                   Pulling image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\"\nvolumemode-5109                      19s         Normal    Pulled                    pod/external-provisioner-m5hmh                                   Successfully pulled image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\"\nvolumemode-5109                      18s         Normal    Created                   pod/external-provisioner-m5hmh                                   Created container nfs-provisioner\nvolumemode-5109                      17s         Normal    Started                   pod/external-provisioner-m5hmh                                   Started container nfs-provisioner\nvolumemode-5109                      5s          Normal    ExternalProvisioning      persistentvolumeclaim/nfswjqhh                                   waiting for a volume to be created, either by external provisioner \"example.com/nfs-volumemode-5109\" or manually created by system administrator\nvolumemode-5109                      5s          Normal    Provisioning              persistentvolumeclaim/nfswjqhh                                   External provisioner is provisioning volume for claim \"volumemode-5109/nfswjqhh\"\nvolumemode-5109                      5s          Normal    ProvisioningSucceeded     persistentvolumeclaim/nfswjqhh                                   Successfully provisioned volume pvc-c8430c3e-84eb-41d5-afb4-5e89faec694b\nvolumemode-7125                      17s         Normal    WaitForFirstConsumer      persistentvolumeclaim/gcepd92pfc                                 waiting for first consumer to be created before binding\nvolumemode-7125                      14s         Normal    ProvisioningSucceeded     persistentvolumeclaim/gcepd92pfc                                 Successfully provisioned volume pvc-2ab45895-a7b3-44a4-aaf3-513beddf9879 using kubernetes.io/gce-pd\nvolumemode-7125                      13s         Normal    Scheduled                 pod/security-context-898b7684-6268-4bbe-b76f-3f336165b576        Successfully assigned volumemode-7125/security-context-898b7684-6268-4bbe-b76f-3f336165b576 to bootstrap-e2e-minion-group-hs9p\nvolumemode-7125                      10s         Normal    Pulled                    pod/security-context-898b7684-6268-4bbe-b76f-3f336165b576        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-7125                      9s          Normal    Created                   
pod/security-context-898b7684-6268-4bbe-b76f-3f336165b576        Created container write-pod\nvolumemode-7125                      8s          Normal    SuccessfulAttachVolume    pod/security-context-898b7684-6268-4bbe-b76f-3f336165b576        AttachVolume.Attach succeeded for volume \"pvc-2ab45895-a7b3-44a4-aaf3-513beddf9879\"\nvolumemode-7125                      8s          Normal    Started                   pod/security-context-898b7684-6268-4bbe-b76f-3f336165b576        Started container write-pod\nvolumemode-8148                      111s        Normal    Scheduled                 pod/gluster-server                                               Successfully assigned volumemode-8148/gluster-server to bootstrap-e2e-minion-group-hs9p\nvolumemode-8148                      109s        Normal    Pulling                   pod/gluster-server                                               Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolumemode-8148                      79s         Normal    Pulled                    pod/gluster-server                                               Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nvolumemode-8148                      79s         Normal    Created                   pod/gluster-server                                               Created container gluster-server\nvolumemode-8148                      77s         Normal    Started                   pod/gluster-server                                               Started container gluster-server\nvolumemode-8148                      27s         Normal    Killing                   pod/gluster-server                                               Stopping container gluster-server\nvolumemode-8148                      46s         Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-8148                      46s         Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Created container agnhost\nvolumemode-8148                      46s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Started container agnhost\nvolumemode-8148                      36s         Normal    Killing                   pod/hostexec-bootstrap-e2e-minion-group-l1kf-glhkg               Stopping container agnhost\nvolumemode-8148                      72s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-8gj7n                                  storageclass.storage.k8s.io \"volumemode-8148\" not found\nvolumemode-8148                      59s         Normal    Scheduled                 pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Successfully assigned volumemode-8148/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8 to bootstrap-e2e-minion-group-l1kf\nvolumemode-8148                      57s         Normal    Pulled                    pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-8148                      57s         Normal    Created                   pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Created container write-pod\nvolumemode-8148                      56s         Normal    Started                   
pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Started container write-pod\nvolumemode-8148                      36s         Normal    Killing                   pod/security-context-53d44518-2d4b-4382-8464-484e8a7b73a8        Stopping container write-pod\nwebhook-1939                         57s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-qxx45                   Successfully assigned webhook-1939/sample-webhook-deployment-5f65f8c764-qxx45 to bootstrap-e2e-minion-group-l1kf\nwebhook-1939                         53s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-qxx45                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-1939                         52s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-qxx45                   Created container sample-webhook\nwebhook-1939                         51s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-qxx45                   Started container sample-webhook\nwebhook-1939                         57s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-qxx45\nwebhook-1939                         57s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-8375                         8s          Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-skbqc                   Successfully assigned webhook-8375/sample-webhook-deployment-5f65f8c764-skbqc to bootstrap-e2e-minion-group-mp1q\nwebhook-8375                         7s          Warning   FailedMount               pod/sample-webhook-deployment-5f65f8c764-skbqc                   MountVolume.SetUp failed for volume \"webhook-certs\" : failed to sync secret cache: timed out waiting for the condition\nwebhook-8375                         7s          Warning   FailedMount               pod/sample-webhook-deployment-5f65f8c764-skbqc                   MountVolume.SetUp failed for volume \"default-token-jjnp7\" : failed to sync secret cache: timed out waiting for the condition\nwebhook-8375                         5s          Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-skbqc                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-8375                         4s          Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-skbqc                   Created container sample-webhook\nwebhook-8375                         4s          Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-skbqc                   Started container sample-webhook\nwebhook-8375                         8s          Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-skbqc\nwebhook-8375                         9s          Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-8725                         1s          Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-jbmhc    
               Successfully assigned webhook-8725/sample-webhook-deployment-5f65f8c764-jbmhc to bootstrap-e2e-minion-group-hs9p\nwebhook-8725                         2s          Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                  Created pod: sample-webhook-deployment-5f65f8c764-jbmhc\nwebhook-8725                         3s          Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\n"
Jan 17 13:35:00.555: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get horizontalpodautoscalers --all-namespaces'
Jan 17 13:35:01.004: INFO: stderr: ""
Jan 17 13:35:01.004: INFO: stdout: "NAMESPACE      NAME             REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE\nkubectl-2531   hpa2mt9p7dghkt   something/cross   <unknown>/80%   1         3         0          0s\n"
Jan 17 13:35:01.612: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get jobs --all-namespaces'
Jan 17 13:35:02.062: INFO: stderr: ""
Jan 17 13:35:02.062: INFO: stdout: "NAMESPACE      NAME                  COMPLETIONS   DURATION   AGE\njob-2740       fail-once-non-local   4/4           14s        29s\njob-3474       fail-once-local       4/4           22s        61s\nkubectl-2531   job1mt9p7dghkt        0/1                      1s\n"
Jan 17 13:35:02.529: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get cronjobs --all-namespaces'
Jan 17 13:35:03.309: INFO: stderr: ""
Jan 17 13:35:03.309: INFO: stdout: "NAMESPACE      NAME                        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE\ncronjob-1058   failed-jobs-history-limit   */1 * * * ?   False     0        <none>          18s\ncronjob-2900   concurrent                  */1 * * * ?   False     0        <none>          62s\nkubectl-2531   cjv1beta1mt9p7dghkt         * * * * *     False     0        <none>          1s\n"
Jan 17 13:35:04.106: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get certificatesigningrequests --all-namespaces'
Jan 17 13:35:04.706: INFO: stderr: ""
Jan 17 13:35:04.706: INFO: stdout: "NAME                                                   AGE     REQUESTOR   CONDITION\ncsr1mt9p7dghkt                                         1s      kubecfg     Pending\nnode-csr-Ljo6cietp0W-Gg33Mf8rswCPQbSgh7u4eBAxM0eN4rE   5m24s   kubelet     Approved,Issued\nnode-csr-_wA5MKS2VuJSgXO-atEytaI7t3g6Uri6QGUy7B9rrJg   5m25s   kubelet     Approved,Issued\nnode-csr-gE_IL6CwFp9G24OsHW6Xc2VNYywE_CJEfelA52J-6yg   5m24s   kubelet     Approved,Issued\nnode-csr-jb5oZY_5-WEPdoFMm8TTXM0BZ5vVi0Wn9KltsR6_xNE   5m24s   kubelet     Approved,Issued\n"
Jan 17 13:35:05.481: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config get networkpolicies --all-namespaces'
Jan 17 13:35:05.848: INFO: stderr: ""
Jan 17 13:35:05.848: INFO: stdout: "NAMESPACE      NAME            POD-SELECTOR   AGE\nkubectl-2531   np2mt9p7dghkt   e=f            0s\n"
... skipping 53 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  kubectl get output
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:424
    should contain custom columns for each resource
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:425
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:20.488: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:20.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 164 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    should support forwarding over websockets
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:482
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:87
Jan 17 13:35:21.224: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 5 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
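
Though skipped for these drivers, the AllowedTopologies mechanism being probed lives on the StorageClass. A sketch for a provisioner that does support it (class name, zone key, and zone value are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: topo-restricted
  provisioner: kubernetes.io/gce-pd
  volumeBindingMode: WaitForFirstConsumer
  allowedTopologies:
  - matchLabelExpressions:
    - key: failure-domain.beta.kubernetes.io/zone
      values: ["us-west1-b"]
  EOF

  # A pod that pins itself to a zone outside the allowed list can never
  # be scheduled, which is the conflict case this suite checks.
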
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:23.182: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:23.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 255 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  GlusterFS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:124
    should be mountable
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:125
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes GlusterFS should be mountable","total":-1,"completed":4,"skipped":28,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 85 lines ...
• [SLOW TEST:18.598 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:21.060 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:32.799: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 23 lines ...
• [SLOW TEST:31.498 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:53
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a private image","total":-1,"completed":7,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:33.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:33.466: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 174 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:33.484: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:13.456 seconds]
[sig-auth] Certificates API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:30.309: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 37 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:38.519: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:38.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 188 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:224
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:237
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":5,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:46.648: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:46.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 90 lines ...
• [SLOW TEST:37.530 seconds]
[k8s.io] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 263 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:21.924 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:46
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":5,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:50.737: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:35:50.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:31.685: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2043
... skipping 150 lines ...
• [SLOW TEST:17.198 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:105
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:55.943: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 107 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:7.635 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:104
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    should apply a new configuration to an existing RC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:923
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":6,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1422
    should copy a file from a running Pod
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1441
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":5,"skipped":34,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:57.793: INFO: Only supported for providers [openstack] (not gce)
... skipping 39 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:46.177: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5734
... skipping 23 lines ...
• [SLOW TEST:12.213 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 69 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:35:58.397: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 79 lines ...
• [SLOW TEST:70.745 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/headless
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":4,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:01.426: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 45 lines ...
• [SLOW TEST:43.136 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:01.665: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:01.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 180 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 113 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:51.284: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4286
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:08.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2296" for this suite.


• [SLOW TEST:7.373 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not launch unsafe, but not explicitly enabled sysctls on the node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":7,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 98 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 156 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:11.231: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 175 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:10.247 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:11.676: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:11.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 256 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should require VolumeAttach for drivers with attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":1,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:14.044 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:11.859: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:11.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
• [SLOW TEST:17.326 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:14.819: INFO: Only supported for providers [aws] (not gce)
... skipping 123 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:15.135: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 132 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:36:07.919: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9125
... skipping 21 lines ...
• [SLOW TEST:9.771 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:17.693: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 48 lines ...
Jan 17 13:36:14.825: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-1645
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:143
[It] should report an error and create no PV
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:776
Jan 17 13:36:17.289: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:17.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-1645" for this suite.


S [SKIPPING] [2.952 seconds]
[sig-storage] Dynamic Provisioning
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:775
    should report an error and create no PV [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:776

    Only supported for providers [aws] (not gce)

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:777
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-auth] Certificates API should support building a client with a CSR","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:33.978: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:18.046: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 144 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":24,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:19.094: INFO: Driver local doesn't support ntfs -- skipping
... skipping 90 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should implement legacy replacement when the update strategy is OnDelete
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:495
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":1,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:19.105: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 221 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:23.714: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver cinder doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:36:09.174: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3522
... skipping 24 lines ...
• [SLOW TEST:16.789 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:34:43.755: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-1058
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:56
[It] should delete failed finished jobs with limit of one job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:245
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods do not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-1058" for this suite.


• [SLOW TEST:104.062 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:245
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":5,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:27.823: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 32 lines ...
• [SLOW TEST:20.529 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":28,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:29.710: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 161 lines ...
Jan 17 13:36:15.722: INFO: Waiting for PV local-pvjsdrz to bind to PVC pvc-64tql
Jan 17 13:36:15.722: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-64tql] to have phase Bound
Jan 17 13:36:15.970: INFO: PersistentVolumeClaim pvc-64tql found but phase is Pending instead of Bound.
Jan 17 13:36:18.209: INFO: PersistentVolumeClaim pvc-64tql found and phase=Bound (2.486400135s)
Jan 17 13:36:18.209: INFO: Waiting up to 3m0s for PersistentVolume local-pvjsdrz to have phase Bound
Jan 17 13:36:18.432: INFO: PersistentVolume local-pvjsdrz found and phase=Bound (223.214743ms)
[It] should fail scheduling due to different NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 17 13:36:18.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-21376f81-34ff-48cf-a0e5-23affe474ddd] Namespace:persistent-local-volumes-test-8906 PodName:hostexec-bootstrap-e2e-minion-group-cksd-bzbbq ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 17 13:36:18.850: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:32.899 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeAffinity
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":4,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] PrivilegedPod [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:17.240 seconds]
[k8s.io] PrivilegedPod [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should enable privileged commands [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:35.313: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
• [SLOW TEST:10.877 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":32,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:49.115: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-4369
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should be able to handle large requests: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:306
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":5,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:41.363: INFO: Only supported for providers [azure] (not gce)
... skipping 85 lines ...
      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:52.103: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 97 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects NO client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:455
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":9,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:41.679: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:41.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
• [SLOW TEST:162.243 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:172
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:42.842: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 21 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:43.926: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 164 lines ...
Jan 17 13:36:33.707: INFO: Trying to get logs from node bootstrap-e2e-minion-group-hs9p pod exec-volume-test-inlinevolume-qsrl container exec-container-inlinevolume-qsrl: <nil>
STEP: delete the pod
Jan 17 13:36:34.606: INFO: Waiting for pod exec-volume-test-inlinevolume-qsrl to disappear
Jan 17 13:36:34.740: INFO: Pod exec-volume-test-inlinevolume-qsrl no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-qsrl
Jan 17 13:36:34.740: INFO: Deleting pod "exec-volume-test-inlinevolume-qsrl" in namespace "volume-4938"
Jan 17 13:36:36.199: INFO: error deleting PD "bootstrap-e2e-8d40b942-1442-4367-bc7c-4319059b904a": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-8d40b942-1442-4367-bc7c-4319059b904a' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource
Jan 17 13:36:36.199: INFO: Couldn't delete PD "bootstrap-e2e-8d40b942-1442-4367-bc7c-4319059b904a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-8d40b942-1442-4367-bc7c-4319059b904a' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-hs9p', resourceInUseByAnotherResource
Jan 17 13:36:43.425: INFO: Successfully deleted PD "bootstrap-e2e-8d40b942-1442-4367-bc7c-4319059b904a".
Jan 17 13:36:43.425: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:43.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4938" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:44.056: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
• [SLOW TEST:33.324 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:45.115: INFO: Only supported for providers [aws] (not gce)
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:35.366 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:49.283: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:51.433: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:51.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 93 lines ...
• [SLOW TEST:6.391 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":11,"skipped":56,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:16.191 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/limit_range.go:55
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":8,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:57.118: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:36:57.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 134 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:57.552: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 37 lines ...
      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":5,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:51.411: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:36:58.379: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
• [SLOW TEST:30.391 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:870
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":8,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:00.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-54" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":7,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:00.650: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should not run without a specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:153
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":12,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:01.181: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:02.900: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:02.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 54 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:35:23.188: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":8,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:03.290: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 105 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:04.130: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:04.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 172 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision a volume and schedule a pod with AllowedTopologies
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":7,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:07.185: INFO: Only supported for providers [openstack] (not gce)
... skipping 67 lines ...
• [SLOW TEST:257.507 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":9,"skipped":49,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:36:41.611: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5969
... skipping 104 lines ...
• [SLOW TEST:12.030 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:10.419: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
Jan 17 13:36:19.455: INFO: stdout: ""
Jan 17 13:36:19.456: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec --namespace=services-4674 execpodtxdkm -- /bin/sh -x -c nc -zv -t -w 2 10.0.166.195 80'
Jan 17 13:36:22.423: INFO: stderr: "+ nc -zv -t -w 2 10.0.166.195 80\nConnection to 10.0.166.195 80 port [tcp/http] succeeded!\n"
Jan 17 13:36:22.423: INFO: stdout: ""
Jan 17 13:36:22.423: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec --namespace=services-4674 execpodtxdkm -- /bin/sh -x -c nc -zv -t -w 2 10.138.0.6 30385'
Jan 17 13:36:24.556: INFO: rc: 1
Jan 17 13:36:24.556: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec --namespace=services-4674 execpodtxdkm -- /bin/sh -x -c nc -zv -t -w 2 10.138.0.6 30385:
Command stdout:

stderr:
+ nc -zv -t -w 2 10.138.0.6 30385
nc: connect to 10.138.0.6 port 30385 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jan 17 13:36:25.556: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec --namespace=services-4674 execpodtxdkm -- /bin/sh -x -c nc -zv -t -w 2 10.138.0.6 30385'
Jan 17 13:36:28.377: INFO: stderr: "+ nc -zv -t -w 2 10.138.0.6 30385\nConnection to 10.138.0.6 30385 port [tcp/30385] succeeded!\n"
Jan 17 13:36:28.377: INFO: stdout: ""
Jan 17 13:36:28.377: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.219.215 --kubeconfig=/workspace/.kube/config exec --namespace=services-4674 execpodtxdkm -- /bin/sh -x -c nc -zv -t -w 2 10.138.0.4 30385'
... skipping 37 lines ...
• [SLOW TEST:91.992 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1530
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:10.546: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:10.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 90 lines ...
• [SLOW TEST:9.527 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:10.725: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:10.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 609 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:12.437: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 162 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:22.759: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
• [SLOW TEST:18.803 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:13.548 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 51 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":8,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:13.248 seconds]
[k8s.io] [sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:25.704: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    should support forwarding over websockets
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:460
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":5,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:26.583: INFO: Only supported for providers [azure] (not gce)
... skipping 138 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:93
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:33.232: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 87 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:53.520 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:973
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":10,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:34.473: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:34.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 174 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:34.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-273" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:34.619: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 96 lines ...
STEP: Deleting the previously created pod
Jan 17 13:37:06.233: INFO: Deleting pod "pvc-volume-tester-xplfc" in namespace "csi-mock-volumes-6507"
Jan 17 13:37:06.725: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xplfc" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 13:37:21.694: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6507","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6507","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6507","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-8137e974-0777-4e93-8226-c98914086f3e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-8137e974-0777-4e93-8226-c98914086f3e"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6507","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-6507","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8137e974-0777-4e93-8226-c98914086f3e","storage.kubernetes.io/csiProvisionerIdentity":"1579268207775-8081-csi-mock-csi-mock-volumes-6507"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8137e974-0777-4e93-8226-c98914086f3e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8137e974-0777-4e93-8226-c98914086f3e","storage.kubernetes.io/csiProvisionerIdentity":"1579268207775-8081-csi-mock-csi-mock-volumes-6507"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8137e974-0777-4e93-8226-c98914086f3e/globalmount","target_path":"/var/lib/kubelet/pods/76acb79b-5887-4e1b-9563-1ff19ccfa846/volumes/kubernetes.io~csi/pvc-8137e974-0777-4e93-8226-c98914086f3e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8137e974-0777-4e93-8226-c98914086f3e","storage.kubernetes.io/csiProvisionerIdentity":"1579268207775-8081-csi-mock-csi-mock-volumes-6507"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/76acb79b-5887-4e1b-9563-1ff19ccfa846/volumes/kubernetes.io~csi/pvc-8137e974-0777-4e93-8226-c98914086f3e/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8137e974-0777-4e93-8226-c98914086f3e/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-6507"},"Response":{},"Error":""}

Jan 17 13:37:21.694: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-xplfc
Jan 17 13:37:21.694: INFO: Deleting pod "pvc-volume-tester-xplfc" in namespace "csi-mock-volumes-6507"
STEP: Deleting claim pvc-6qv95
Jan 17 13:37:22.080: INFO: Waiting up to 2m0s for PersistentVolume pvc-8137e974-0777-4e93-8226-c98914086f3e to get deleted
... skipping 74 lines ...
• [SLOW TEST:13.944 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:38.021: INFO: Only supported for providers [openstack] (not gce)
... skipping 93 lines ...
• [SLOW TEST:35.461 seconds]
[sig-network] Network
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should set TCP CLOSE_WAIT timeout
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:54
------------------------------
{"msg":"PASSED [sig-network] Network should set TCP CLOSE_WAIT timeout","total":-1,"completed":9,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 85 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:445
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:446
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":12,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:46.086: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects NO client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:477
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":9,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 66 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:51.089: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:51.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 159 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:51.453: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 67 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:54.981: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:37:54.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 184 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with multiple PVs and PVCs all in same ns
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:211
      should create 3 PVs and 3 PVCs: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:242
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":6,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:56.571: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 85 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":47,"failed":0}
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:37:31.856: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8127
... skipping 24 lines ...
• [SLOW TEST:26.552 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:58.412: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 56 lines ...
• [SLOW TEST:24.786 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":11,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:37:58.924: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 160 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 43 lines ...
• [SLOW TEST:11.673 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731
    should create a deployment from an image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":8,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:07.957: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:07.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":48,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:09.528: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:09.760: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:09.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 54 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:12.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3487" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":11,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:12.742: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 103 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":8,"skipped":39,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:36:46.135: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-8817
... skipping 28 lines ...
STEP: Creating the service on top of the pods in kubernetes
Jan 17 13:37:20.790: INFO: Service node-port-service in namespace nettest-8817 found.
Jan 17 13:37:21.607: INFO: Service session-affinity-service in namespace nettest-8817 found.
STEP: dialing(udp) test-container-pod --> 10.0.161.47:90
Jan 17 13:37:21.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.87:8080/dial?request=hostName&protocol=udp&host=10.0.161.47&port=90&tries=1'] Namespace:nettest-8817 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 13:37:21.847: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 13:37:28.108: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.3.87:48072-\u003e10.0.161.47:90: i/o timeout'"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 17 13:37:30.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.87:8080/dial?request=hostName&protocol=udp&host=10.0.161.47&port=90&tries=1'] Namespace:nettest-8817 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 13:37:30.686: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 13:37:33.965: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 17 13:37:36.130: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.87:8080/dial?request=hostName&protocol=udp&host=10.0.161.47&port=90&tries=1'] Namespace:nettest-8817 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 13:37:36.130: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 13:37:38.440: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in: (*v1.Pod)(nil)
... skipping 133 lines ...
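The dial loop above has a fixed shape: each try curls the test-container-pod's /dial endpoint and decodes the JSON body, treating {"errors":[...]} as a failed try (try 0 hit a UDP read timeout) and {"responses":[...]} as a hit. A minimal Go sketch of one such probe, assuming only the endpoint URL and the response shape visible in the log (in the real test the curl runs from inside the pod via ExecWithOptions):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // dialResult mirrors the JSON bodies in the log:
    // {"responses":["netserver-0"]} on success, {"errors":[...]} on failure.
    type dialResult struct {
        Responses []string `json:"responses"`
        Errors    []string `json:"errors"`
    }

    func main() {
        // URL copied from the log; only reachable from inside the cluster.
        url := "http://10.64.3.87:8080/dial?request=hostName&protocol=udp&host=10.0.161.47&port=90&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var r dialResult
        if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
            panic(err)
        }
        fmt.Printf("responses=%v errors=%v\n", r.Responses, r.Errors)
    }
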
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support multiple inline ephemeral volumes
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:177
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:15.947: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:15.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
• [SLOW TEST:13.266 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:107
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":64,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:37:11.624: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 41 lines ...
Jan 17 13:38:00.043: INFO: Pod exec-volume-test-preprovisionedpv-r96r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-r96r
Jan 17 13:38:00.043: INFO: Deleting pod "exec-volume-test-preprovisionedpv-r96r" in namespace "volume-328"
STEP: Deleting pv and pvc
Jan 17 13:38:00.221: INFO: Deleting PersistentVolumeClaim "pvc-b4zxp"
Jan 17 13:38:00.473: INFO: Deleting PersistentVolume "gcepd-bg58g"
Jan 17 13:38:02.159: INFO: error deleting PD "bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:38:02.159: INFO: Couldn't delete PD "bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:38:08.409: INFO: error deleting PD "bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:38:08.409: INFO: Couldn't delete PD "bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-cksd', resourceInUseByAnotherResource
Jan 17 13:38:15.584: INFO: Successfully deleted PD "bootstrap-e2e-b5b8fcb4-b753-49f6-a265-a6ac0e2487d8".
Jan 17 13:38:15.584: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:15.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-328" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:16.389: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:17.013: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:17.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 169 lines ...
• [SLOW TEST:13.286 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:38:21.249: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:38:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":8,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:37:34.717: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-95
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 26893 lines ...
• [SLOW TEST:7.550 seconds]
[sig-instrumentation] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from API server.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:46
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":14,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:18.084: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 157 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":19,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:18.582: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:18.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 33 lines ...
STEP: looking for the results for each expected name from probers
Jan 17 13:45:58.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-2876.svc.cluster.local from pod dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06: the server could not find the requested resource (get pods dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06)
Jan 17 13:45:58.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2876.svc.cluster.local from pod dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06: the server could not find the requested resource (get pods dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06)
Jan 17 13:45:59.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2876.svc.cluster.local from pod dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06: the server could not find the requested resource (get pods dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06)
Jan 17 13:45:59.428: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2876.svc.cluster.local from pod dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06: the server could not find the requested resource (get pods dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06)
Jan 17 13:46:02.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-2876.svc.cluster.local from pod dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06: the server could not find the requested resource (get pods dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06)
Jan 17 13:46:04.993: INFO: Lookups using dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06 failed for: [wheezy_udp@dns-test-service.dns-2876.svc.cluster.local wheezy_tcp@dns-test-service.dns-2876.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2876.svc.cluster.local jessie_tcp@dns-test-service.dns-2876.svc.cluster.local]

Jan 17 13:46:16.331: INFO: DNS probes using dns-2876/dns-test-134af574-f99a-4d1f-8e3c-9c77222b2b06 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:64.318 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":13,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Deleting the previously created pod
Jan 17 13:45:51.891: INFO: Deleting pod "pvc-volume-tester-j25vp" in namespace "csi-mock-volumes-8241"
Jan 17 13:45:52.244: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j25vp" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 13:45:59.469: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8241","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8241","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8241","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8241","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8241","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56","storage.kubernetes.io/csiProvisionerIdentity":"1579268718755-8081-csi-mock-csi-mock-volumes-8241"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56","storage.kubernetes.io/csiProvisionerIdentity":"1579268718755-8081-csi-mock-csi-mock-volumes-8241"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/globalmount","target_path":"/var/lib/kubelet/pods/802da6ff-e8c4-42ad-8bfe-c556789deed4/volumes/kubernetes.io~csi/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56","storage.kubernetes.io/csiProvisionerIdentity":"1579268718755-8081-csi-mock-csi-mock-volumes-8241"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/802da6ff-e8c4-42ad-8bfe-c556789deed4/volumes/kubernetes.io~csi/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/802da6ff-e8c4-42ad-8bfe-c556789deed4/volumes/kubernetes.io~csi/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8241"},"Response":{},"Error":""}

Jan 17 13:45:59.469: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-j25vp
Jan 17 13:45:59.469: INFO: Deleting pod "pvc-volume-tester-j25vp" in namespace "csi-mock-volumes-8241"
STEP: Deleting claim pvc-czz5t
Jan 17 13:46:00.332: INFO: Waiting up to 2m0s for PersistentVolume pvc-f13caee9-ac20-4ef7-8e60-5824a7737e56 to get deleted
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    should not be passed when CSIDriver does not exist
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
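The "Checking CSI driver logs" step in the record above works because every call the mock driver handles is logged as a fixed "gRPCCall: " prefix followed by one JSON object, so the test can scan those lines for the method it cares about (hence "Found NodeUnpublishVolume"). A minimal sketch of that scan, assuming only the line format shown in the log:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // grpcCall holds the fields of interest from one logged call.
    type grpcCall struct {
        Method string `json:"Method"`
        Error  string `json:"Error"`
    }

    func main() {
        driverLog := "mock driver started\n" +
            `gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4"},"Response":{},"Error":""}`
        sc := bufio.NewScanner(strings.NewReader(driverLog))
        for sc.Scan() {
            line := sc.Text()
            if !strings.HasPrefix(line, "gRPCCall: ") {
                continue // e.g. "mock driver started"
            }
            var c grpcCall
            if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "gRPCCall: ")), &c); err != nil {
                continue
            }
            if c.Method == "/csi.v1.Node/NodeUnpublishVolume" {
                fmt.Printf("Found NodeUnpublishVolume (err=%q)\n", c.Error)
            }
        }
    }
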
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":20,"skipped":116,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:45:32.984: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":21,"skipped":116,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:26.308: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:26.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 85 lines ...
• [SLOW TEST:44.858 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":9,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:31.937: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:31.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 152 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":18,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 131 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":16,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:45:44.535: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":92,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:32.285: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 89 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":15,"skipped":126,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:22.142: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8907
... skipping 23 lines ...
• [SLOW TEST:15.237 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:90
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":16,"skipped":126,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:37.393: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
• [SLOW TEST:26.994 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":12,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:37.771: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:37.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 220 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an image specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":15,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:38.858: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":18,"skipped":75,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:45:44.511: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2171
... skipping 66 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":19,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:41.690: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:41.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [k8s.io] GlusterDynamicProvisioner
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should create and delete persistent volumes [fast]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:747
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":22,"skipped":170,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:45.277: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
• [SLOW TEST:14.585 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":109,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:46.703: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
• [SLOW TEST:10.481 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":77,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 188 lines ...
• [SLOW TEST:153.601 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:53.957: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 65 lines ...
• [SLOW TEST:15.780 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:63
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update PodDisruptionBudget status","total":-1,"completed":18,"skipped":91,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 395 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":13,"skipped":64,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":25,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:54.515: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 128 lines ...
      Driver nfs doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":114,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:45:57.241: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":19,"skipped":114,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:46:55.829: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:46:55.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 209 lines ...
• [SLOW TEST:48.535 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":18,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:06.526: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 40 lines ...
• [SLOW TEST:25.238 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":20,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:06.952: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 59 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 53 lines ...
Jan 17 13:46:05.115: INFO: PersistentVolumeClaim csi-hostpathlvxsr found but phase is Pending instead of Bound.
Jan 17 13:46:07.306: INFO: PersistentVolumeClaim csi-hostpathlvxsr found but phase is Pending instead of Bound.
Jan 17 13:46:09.644: INFO: PersistentVolumeClaim csi-hostpathlvxsr found but phase is Pending instead of Bound.
Jan 17 13:46:11.806: INFO: PersistentVolumeClaim csi-hostpathlvxsr found and phase=Bound (11.356853257s)
STEP: Expanding non-expandable pvc
Jan 17 13:46:12.640: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 17 13:46:13.148: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:15.874: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:17.961: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:19.902: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:22.220: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:23.965: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:25.754: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:28.550: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:30.241: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:31.840: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:34.217: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:35.717: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:37.965: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:39.299: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:41.652: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:43.421: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 17 13:46:43.690: INFO: Error updating pvc csi-hostpathlvxsr: persistentvolumeclaims "csi-hostpathlvxsr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 17 13:46:43.690: INFO: Deleting PersistentVolumeClaim "csi-hostpathlvxsr"
Jan 17 13:46:43.990: INFO: Waiting up to 5m0s for PersistentVolume pvc-bf57f957-7258-443d-90cb-1ac4fd56293f to get deleted
Jan 17 13:46:44.334: INFO: PersistentVolume pvc-bf57f957-7258-443d-90cb-1ac4fd56293f found and phase=Bound (343.794077ms)
Jan 17 13:46:49.411: INFO: PersistentVolume pvc-bf57f957-7258-443d-90cb-1ac4fd56293f was removed
STEP: Deleting sc
... skipping 46 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":17,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:07.345: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 456 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":19,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:12.570: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 301 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":16,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:06.587: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":17,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:16.029: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:16.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 187 lines ...
STEP: cleaning the environment after gcepd
Jan 17 13:46:49.272: INFO: Deleting pod "gcepd-client" in namespace "volume-2421"
Jan 17 13:46:49.422: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 17 13:47:10.151: INFO: Deleting PersistentVolumeClaim "pvc-g4vd6"
Jan 17 13:47:10.949: INFO: Deleting PersistentVolume "gcepd-95gpp"
Jan 17 13:47:12.870: INFO: error deleting PD "bootstrap-e2e-df6850ad-5314-4cca-8f2d-e423d1455062": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-df6850ad-5314-4cca-8f2d-e423d1455062' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:47:12.870: INFO: Couldn't delete PD "bootstrap-e2e-df6850ad-5314-4cca-8f2d-e423d1455062", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-df6850ad-5314-4cca-8f2d-e423d1455062' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:47:20.000: INFO: Successfully deleted PD "bootstrap-e2e-df6850ad-5314-4cca-8f2d-e423d1455062".
Jan 17 13:47:20.000: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:20.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2421" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":25,"skipped":198,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:20.748: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 101 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:21.208: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 145 lines ...
• [SLOW TEST:17.496 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
... skipping 131 lines ...
• [SLOW TEST:14.887 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 123 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":20,"skipped":119,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:34.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3132" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":19,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:35.541: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 90 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:89.158 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":135,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:40.210: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 87 lines ...
• [SLOW TEST:9.408 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":206,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:42.194: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:42.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":20,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:30.919: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":21,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:47.176: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:47.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":19,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 92 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":17,"skipped":98,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:55.604: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 45 lines ...
• [SLOW TEST:20.832 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":21,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:56.244: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:56.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:445
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:449
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":20,"skipped":90,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:56.344: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction","total":-1,"completed":23,"skipped":191,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:47:01.879: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-2540
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":191,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:56.950: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:56.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 57 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  NFSv4
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:74
    should be mountable for NFSv4
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:75
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4","total":-1,"completed":12,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":22,"skipped":117,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:59.375: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 87 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":26,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:47:59.829: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:47:59.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 80 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver local doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":17,"skipped":134,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:50.782: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5059
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":134,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 109 lines ...
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":14,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:03.236: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:03.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
• [SLOW TEST:15.074 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:56
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":20,"skipped":106,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:29.389 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Provider:GCE]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:65
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Provider:GCE]","total":-1,"completed":11,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:04.941: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":13,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:07.018: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:07.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 359 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    apply set/view last-applied
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:959
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":78,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":22,"skipped":128,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:09.342: INFO: Driver local doesn't support ext3 -- skipping
... skipping 106 lines ...
• [SLOW TEST:14.882 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":144,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:14.738: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":18,"skipped":103,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:47:11.726: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8870
... skipping 91 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":20,"skipped":123,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:17.748: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 41 lines ...
Jan 17 13:47:37.962: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:38.357: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:39.472: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:39.691: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:39.942: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:40.144: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:40.502: INFO: Lookups using dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local]

Jan 17 13:47:46.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:46.704: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:47.069: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:47.529: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:48.759: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:49.106: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:49.416: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:49.702: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:50.341: INFO: Lookups using dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local]

Jan 17 13:47:51.321: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:51.891: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:52.286: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:52.541: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:53.149: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:53.494: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:54.401: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:55.030: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:47:56.087: INFO: Lookups using dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local jessie_udp@dns-test-service-2.dns-3030.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3030.svc.cluster.local]

Jan 17 13:48:00.871: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:48:01.227: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:48:01.608: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:48:01.932: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local from pod dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58: the server could not find the requested resource (get pods dns-test-3ebce469-4867-4b4f-830e-3bef66934c58)
Jan 17 13:48:07.938: INFO: Lookups using dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3030.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3030.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3030.svc.cluster.local]

Jan 17 13:48:16.464: INFO: DNS probes using dns-3030/dns-test-3ebce469-4867-4b4f-830e-3bef66934c58 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:55.029 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":19,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:15.859 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":12,"skipped":60,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":24,"skipped":122,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:38.381: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 124 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":25,"skipped":122,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:68.624 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":20,"skipped":102,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:47:14.060: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 72 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:21.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":21,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:21.749: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:21.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 42 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:21.786: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:21.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":19,"skipped":105,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:08.604: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3184
... skipping 23 lines ...
• [SLOW TEST:14.018 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:22.623: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:22.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 159 lines ...
Jan 17 13:47:52.526: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 13:47:54.509: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-1966 PodName:gcepd-client ContainerName:gcepd-client Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 17 13:47:54.509: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after gcepd
Jan 17 13:47:56.940: INFO: Deleting pod "gcepd-client" in namespace "volume-1966"
Jan 17 13:47:57.557: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 17 13:48:09.493: INFO: error deleting PD "bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:48:09.493: INFO: Couldn't delete PD "bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:48:15.603: INFO: error deleting PD "bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:48:15.603: INFO: Couldn't delete PD "bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/disks/bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777' is already being used by 'projects/k8s-jkns-gce-ubuntu-1-6-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-l1kf', resourceInUseByAnotherResource
Jan 17 13:48:22.766: INFO: Successfully deleted PD "bootstrap-e2e-c994887e-aae9-4aa0-9e3d-cec59f3bc777".
Jan 17 13:48:22.766: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
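
The retry loop above is the usual cleanup pattern for GCE PDs: deletion returns 400 resourceInUseByAnotherResource while the disk is still attached to the node, so the cleanup sleeps and retries until the detach triggered by pod deletion completes. A sketch of that pattern, assuming the fmt and time imports; deleteDisk is a hypothetical stand-in for the compute API call:

    // Sketch of the delete-with-retry cleanup; deleteDisk is a hypothetical
    // stand-in for the compute API call that the framework makes.
    func deletePDWithRetry(diskName string) error {
        var lastErr error
        for start := time.Now(); time.Since(start) < 5*time.Minute; {
            if lastErr = deleteDisk(diskName); lastErr == nil {
                fmt.Printf("Successfully deleted PD %q.\n", diskName)
                return nil
            }
            // 400 resourceInUseByAnotherResource: the instance has not
            // detached the disk yet, so sleep and retry.
            fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", diskName, lastErr)
            time.Sleep(5 * time.Second)
        }
        return lastErr
    }
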
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:22.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1966" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":20,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:23.706: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:23.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 131 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":21,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:26.230: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:26.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":62,"failed":0}
[BeforeEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:21.437: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6538
... skipping 21 lines ...
• [SLOW TEST:12.611 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:34.050: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:34.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
• [SLOW TEST:13.897 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:35.095: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:35.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:36.930: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:36.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 141 lines ...
• [SLOW TEST:15.752 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:37.506: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:37.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Driver azure-disk doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":103,"failed":0}
[BeforeEach] [k8s.io] [sig-node] PreStop
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:15.158: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5252
... skipping 31 lines ...
• [SLOW TEST:26.342 seconds]
[k8s.io] [sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":20,"skipped":103,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 155 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":-1,"completed":22,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:46.834: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":19,"skipped":131,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:47:28.748: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-544
... skipping 38 lines ...
Jan 17 13:47:45.609: INFO: PersistentVolumeClaim pvc-s8jzm found but phase is Pending instead of Bound.
Jan 17 13:47:47.927: INFO: PersistentVolumeClaim pvc-s8jzm found but phase is Pending instead of Bound.
Jan 17 13:47:50.353: INFO: PersistentVolumeClaim pvc-s8jzm found but phase is Pending instead of Bound.
Jan 17 13:47:52.622: INFO: PersistentVolumeClaim pvc-s8jzm found but phase is Pending instead of Bound.
Jan 17 13:47:55.024: INFO: PersistentVolumeClaim pvc-s8jzm found and phase=Bound (14.378943966s)
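
The Pending-then-Bound lines above come from polling the claim phase while the external provisioner creates and binds the PV. A sketch of that poll, assuming cs is a configured kubernetes.Interface and the client-go signatures of this release (newer client-go also takes a context in Get); namespace and claim name match the log:

    // Sketch of the claim-phase poll seen above.
    func waitForClaimBound(cs kubernetes.Interface) error {
        return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("csi-mock-volumes-544").
                Get("pvc-s8jzm", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if pvc.Status.Phase != v1.ClaimBound {
                fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n",
                    pvc.Name, pvc.Status.Phase)
                return false, nil // keep polling
            }
            return true, nil
        })
    }
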
STEP: checking for CSIInlineVolumes feature
Jan 17 13:48:19.446: INFO: Error getting logs for pod csi-inline-volume-bbx9x: the server rejected our request for an unknown reason (get pods csi-inline-volume-bbx9x)
STEP: Deleting pod csi-inline-volume-bbx9x in namespace csi-mock-volumes-544
STEP: Deleting the previously created pod
Jan 17 13:48:22.451: INFO: Deleting pod "pvc-volume-tester-cntfq" in namespace "csi-mock-volumes-544"
Jan 17 13:48:22.831: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cntfq" to be fully deleted
STEP: Checking CSI driver logs
Jan 17 13:48:32.420: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-544","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-544","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-544","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-544","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-194af8b2-1abd-4d91-895e-780b32de24d7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-194af8b2-1abd-4d91-895e-780b32de24d7"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-544","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-194af8b2-1abd-4d91-895e-780b32de24d7","storage.kubernetes.io/csiProvisionerIdentity":"1579268871631-8081-csi-mock-csi-mock-volumes-544"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-194af8b2-1abd-4d91-895e-780b32de24d7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-194af8b2-1abd-4d91-895e-780b32de24d7","storage.kubernetes.io/csiProvisionerIdentity":"1579268871631-8081-csi-mock-csi-mock-volumes-544"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-194af8b2-1abd-4d91-895e-780b32de24d7/globalmount","target_path":"/var/lib/kubelet/pods/1b45b61b-a629-4f4b-baf3-49b0b71b6120/volumes/kubernetes.io~csi/pvc-194af8b2-1abd-4d91-895e-780b32de24d7/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-cntfq","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-544","csi.storage.k8s.io/pod.uid":"1b45b61b-a629-4f4b-baf3-49b0b71b6120","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-194af8b2-1abd-4d91-895e-780b32de24d7","storage.kubernetes.io/csiProvisionerIdentity":"1579268871631-8081-csi-mock-csi-mock-volumes-544"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1b45b61b-a629-4f4b-baf3-49b0b71b6120/volumes/kubernetes.io~csi/pvc-194af8b2-1abd-4d91-895e-780b32de24d7/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-194af8b2-1abd-4d91-895e-780b32de24d7/globalmount"},"Response":{},"Error":""}

Jan 17 13:48:32.420: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 17 13:48:32.420: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-cntfq
Jan 17 13:48:32.420: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-544
Jan 17 13:48:32.420: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 1b45b61b-a629-4f4b-baf3-49b0b71b6120
Jan 17 13:48:32.420: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    should be passed when podInfoOnMount=true
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":20,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:48:48.226: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:48:48.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 158 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":18,"skipped":101,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:01.267: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:01.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 218 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:58
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":23,"skipped":147,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:65.520 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":17,"skipped":105,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:46:56.108: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9163
... skipping 14 lines ...
• [SLOW TEST:126.886 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":18,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:02.997: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
• [SLOW TEST:22.215 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:03.724: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":21,"skipped":122,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":13,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 333 lines ...
Jan 17 13:48:44.989: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after flex
Jan 17 13:48:47.018: INFO: Deleting pod "flex-client" in namespace "flexvolume-3814"
Jan 17 13:48:47.105: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Jan 17 13:48:59.700: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-3814" to be "terminated due to deadline exceeded"
Jan 17 13:49:00.049: INFO: Pod "flex-client" in namespace "flexvolume-3814" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-attachable-flexvolume-3814 from node bootstrap-e2e-minion-group-hs9p
Jan 17 13:49:10.049: INFO: Getting external IP address for bootstrap-e2e-minion-group-hs9p
Jan 17 13:49:10.583: INFO: ssh prow@35.247.71.240:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-3814
Jan 17 13:49:10.583: INFO: ssh prow@35.247.71.240:22: stdout:    ""
Jan 17 13:49:10.583: INFO: ssh prow@35.247.71.240:22: stderr:    ""
Jan 17 13:49:10.583: INFO: ssh prow@35.247.71.240:22: exit code: 0
... skipping 11 lines ...
• [SLOW TEST:62.018 seconds]
[sig-storage] Flexvolumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be mountable when attachable
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:205
------------------------------
{"msg":"PASSED [sig-storage] Flexvolumes should be mountable when attachable","total":-1,"completed":15,"skipped":80,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 149 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.136 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":28,"skipped":147,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:11.447: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:11.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 32 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:11.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2163" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":-1,"completed":14,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:11.633: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:11.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:27.015: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 48 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":22,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:11.809: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:11.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 268 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":23,"skipped":128,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:8.783 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":109,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:12.520: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 171 lines ...
• [SLOW TEST:91.747 seconds]
[sig-auth] PodSecurityPolicy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow pods under the privileged policy.PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:101
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy","total":-1,"completed":27,"skipped":211,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:13.955: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 36 lines ...
STEP: Destroying namespace "services-7670" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces","total":-1,"completed":16,"skipped":87,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 13:48:40.073: INFO: File wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:40.467: INFO: File jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:40.468: INFO: Lookups using dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e failed for: [wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local]

Jan 17 13:48:45.750: INFO: File wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:45.943: INFO: File jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:45.943: INFO: Lookups using dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e failed for: [wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local]

Jan 17 13:48:51.143: INFO: File wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:51.644: INFO: File jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:51.644: INFO: Lookups using dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e failed for: [wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local jessie_udp@dns-test-service-3.dns-7632.svc.cluster.local]

Jan 17 13:48:56.430: INFO: File wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local from pod  dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 17 13:48:56.677: INFO: Lookups using dns-7632/dns-test-a3d6d371-5778-4f2c-a800-f6301299863e failed for: [wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local]

Jan 17 13:49:01.009: INFO: DNS probes using dns-test-a3d6d371-5778-4f2c-a800-f6301299863e succeeded
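
The probe flip above, from foo.example.com to bar.example.com, is driven by updating the ExternalName target on the service and re-polling until both test images report the new CNAME; the lingering foo answers persist only until the cluster DNS layer picks up the change. A sketch of the mutation, assuming cs is a configured kubernetes.Interface and the client-go signatures of this release:

    // Sketch of the service mutation behind the probe transition above.
    func flipExternalName(cs kubernetes.Interface) error {
        svc, err := cs.CoreV1().Services("dns-7632").Get("dns-test-service-3", metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.ExternalName = "bar.example.com"
        _, err = cs.CoreV1().Services("dns-7632").Update(svc)
        return err
    }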

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7632.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7632.svc.cluster.local; sleep 1; done
... skipping 85 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":15,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:19.535: INFO: Only supported for providers [aws] (not gce)
... skipping 44 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an explicit non-root user ID [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:123
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":23,"skipped":154,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:21.756: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 117 lines ...
• [SLOW TEST:34.237 seconds]
[sig-api-machinery] Servers with support for API chunking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":21,"skipped":136,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:14.477 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":17,"skipped":90,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:28.678: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":23,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:30.693: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:30.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
• [SLOW TEST:29.501 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":24,"skipped":148,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:31.762: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:31.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 143 lines ...
• [SLOW TEST:14.339 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":162,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:36.103: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:36.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":25,"skipped":142,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:53.673: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1902
... skipping 144 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":19,"skipped":135,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:40.815: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 39 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":21,"skipped":107,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:48:08.119: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9352
... skipping 66 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":21,"skipped":111,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:43.856: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:43.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 150 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec through kubectl proxy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":19,"skipped":104,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 95 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for pod-Service: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:163
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":-1,"completed":15,"skipped":63,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:52.294: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 113 lines ...
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kmx4m webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-kmx4m 76c50ba3-6077-4e75-9220-820867f5fbfa 31576 0 2020-01-17 13:49:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc000664090 0xc000664091}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-mp1q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-17 13:49:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.616: INFO: Pod "webserver-deployment-c7997dcc8-pwdxn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pwdxn webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-pwdxn e6749702-427b-4f31-ba9e-ac1fdf57a50a 31715 0 2020-01-17 13:49:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc000664240 0xc000664241}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-hs9p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.616: INFO: Pod "webserver-deployment-c7997dcc8-rrs64" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rrs64 webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-rrs64 80421939-1651-41f5-a055-6db6fc70e350 31783 0 2020-01-17 13:49:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc0006643f0 0xc0006643f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-mp1q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-17 13:49:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.616: INFO: Pod "webserver-deployment-c7997dcc8-s47gg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s47gg webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-s47gg ba9196d2-db09-4ae1-a2e4-3534dc902794 31790 0 2020-01-17 13:49:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc0006646c0 0xc0006646c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-cksd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.2.19,StartTime:2020-01-17 13:49:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.617: INFO: Pod "webserver-deployment-c7997dcc8-s5fss" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s5fss webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-s5fss 2275b997-95d9-4b96-8bb5-a8e419485b38 31816 0 2020-01-17 13:49:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc000664910 0xc000664911}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-hs9p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-01-17 13:49:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.617: INFO: Pod "webserver-deployment-c7997dcc8-t8d28" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t8d28 webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-t8d28 b67a28d3-8e05-41ad-ad92-2dea0f673f7d 31554 0 2020-01-17 13:49:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc000664b50 0xc000664b51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-mp1q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-17 13:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 13:49:52.617: INFO: Pod "webserver-deployment-c7997dcc8-xgnd6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xgnd6 webserver-deployment-c7997dcc8- deployment-3009 /api/v1/namespaces/deployment-3009/pods/webserver-deployment-c7997dcc8-xgnd6 f8ab0460-9cc2-4af7-83b8-49ce515c0acb 31811 0 2020-01-17 13:49:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9507346-dcb5-41ab-9480-87c1c662a4d6 0xc000664d50 0xc000664d51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4rjph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4rjph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4rjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-l1kf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-17 13:49:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.6,PodIP:,StartTime:2020-01-17 13:49:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 6 lines ...
• [SLOW TEST:41.863 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":29,"skipped":148,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:53.319: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:53.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 103 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":24,"skipped":134,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:54.020: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:54.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 25 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 99 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":14,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:49:56.230: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 22 lines ...
Jan 17 13:49:43.125: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7459
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 17 13:49:45.963: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 13:49:58.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7459" for this suite.
• [SLOW TEST:15.799 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":23,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 97 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":15,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:50:02.262: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":157,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 13:49:46.228: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-4902
... skipping 24 lines ...
• [SLOW TEST:21.698 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:68
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":26,"skipped":157,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 17 13:50:07.930: INFO: Driver local doesn't support ext4 -- skipping
... skipping 15 lines ...
      Driver local doesn't support ext4 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watch