Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-16 08:32
Elapsed: 1h8m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/aa5629a5-55ec-4ede-b931-cf8005648699/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 612 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.82.211.53; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...............Kubernetes cluster created.
Cluster "kubernetes-jkns-e2e-gce-serial_bootstrap-e2e" set.
User "kubernetes-jkns-e2e-gce-serial_bootstrap-e2e" set.
Context "kubernetes-jkns-e2e-gce-serial_bootstrap-e2e" created.
Switched to context "kubernetes-jkns-e2e-gce-serial_bootstrap-e2e".
... skipping 27 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   25s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-451g   Ready                      <none>   21s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-7fqk   Ready                      <none>   20s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-8mzr   Ready                      <none>   21s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-zb1j   Ready                      <none>   22s   v1.18.0-alpha.1.810+f437ff75d45517
Validate output:
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 77 lines ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=47059 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 13 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
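The `ERROR: (gcloud.compute.scp) ... return code [1]` lines above are expected noise rather than test failures: the log-dump script asks `scp` for glob patterns such as `/var/log/fluentd.log*`, and when a node never wrote one of those optional logs the pattern matches nothing, so `scp` reports "No such file or directory" and exits non-zero. A minimal local sketch of the same unmatched-glob behavior (paths illustrative, assuming they do not exist on this machine):

```python
import glob

# Optional log patterns like the ones the dump script requests over scp.
optional_logs = ["/var/log/fluentd.log*", "/var/log/startupscript.log*"]

for pattern in optional_logs:
    matches = glob.glob(pattern)
    if not matches:
        # Mirrors the "scp: ...: No such file or directory" lines in the log.
        print(f"{pattern}: no such file or directory")
    else:
        print(f"{pattern}: would copy {matches}")
```

The other logs in the same batch still transfer; only the overall `scp` exit code is tainted by the misses.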
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-451g bootstrap-e2e-minion-group-7fqk bootstrap-e2e-minion-group-8mzr bootstrap-e2e-minion-group-zb1j
Failures for bootstrap-e2e-minion-group (if any):
2020/01/16 09:05:36 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m12.593192602s
2020/01/16 09:05:36 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: kubernetes-jkns-e2e-gce-serial
... skipping 1189 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:05:56.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
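The `{"msg": ...}` lines interleaved between test blocks are JSON progress records emitted by the e2e runner, one per completed spec, with per-worker counters. A minimal sketch of reading one (the sample is copied from this log):

```python
import json

# One progress record as it appears in the build log.
sample = '{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \\"application/vnd.kubernetes.protobuf,application/json\\"","total":-1,"completed":1,"skipped":3,"failed":0}'

record = json.loads(sample)
print(record["completed"], record["skipped"], record["failed"])  # 1 3 0
```

Tallying `failed` across all such records is one way to confirm the "0 failed" summary at the top of the page.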
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:05:56.767: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 229 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:00.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-787" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:01.331: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:01.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 167 lines ...
• [SLOW TEST:8.922 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:05.377: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:05.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 262 lines ...
• [SLOW TEST:9.662 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support r/w [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:11.141 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:07.709: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:07.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 101 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:07.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1506" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:08.733: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 39 lines ...
• [SLOW TEST:12.496 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:09.022: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:09.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 74 lines ...
• [SLOW TEST:12.668 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:11.804 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:10.932: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
• [SLOW TEST:14.620 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:11.168: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    creating/deleting custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:15.114: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:15.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 83 lines ...
• [SLOW TEST:21.356 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:17.810: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:17.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 21 lines ...
Jan 16 09:05:56.544: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
Jan 16 09:05:58.953: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 16 09:05:59.501: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4145
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:115
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:19.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4145" for this suite.


• [SLOW TEST:22.988 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:115
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:19.542: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 160 lines ...
• [SLOW TEST:12.539 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
• [SLOW TEST:6.855 seconds]
[sig-node] RuntimeClass
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:39
  should reject a Pod requesting a RuntimeClass with an unconfigured handler
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:47
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:31.211: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:31.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 192 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:09.054: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in apparmor-5075
... skipping 21 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:21.104 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:32.042: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:32.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 313 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:39.626: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:39.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-node] RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:38.668: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-6297
... skipping 27 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6246
STEP: Creating statefulset with conflicting port in namespace statefulset-6246
STEP: Waiting until pod test-pod will start running in namespace statefulset-6246
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6246
Jan 16 09:06:23.163: INFO: Observed stateful pod in namespace: statefulset-6246, name: ss-0, uid: 842494e7-8c65-46df-b40d-0a6b0ac2d16b, status phase: Failed. Waiting for statefulset controller to delete.
Jan 16 09:06:23.284: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6246
STEP: Removing pod with conflicting port in namespace statefulset-6246
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6246 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 16 09:06:31.373: INFO: Deleting all statefulset in ns statefulset-6246
... skipping 11 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 15 lines ...
Jan 16 09:06:14.955: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-2364-gcepd-sck2pg4
STEP: creating a claim
Jan 16 09:06:15.243: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jan 16 09:06:16.076: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 09:06:16.888: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:19.380: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:21.569: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:23.476: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:25.565: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:27.966: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:29.636: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:31.180: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:33.299: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:35.318: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:37.547: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:39.461: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:41.165: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:43.277: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:45.266: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:47.742: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:06:47.937: INFO: Error updating pvc gcepdp5g9l: PersistentVolumeClaim "gcepdp5g9l" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 09:06:47.937: INFO: Deleting PersistentVolumeClaim "gcepdp5g9l"
STEP: Deleting sc
Jan 16 09:06:48.703: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 8 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":15,"failed":0}
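The repeated `spec: Forbidden: is immutable after creation except resources.requests for bound claims` messages in the passing test above come from the API server's PVC update validation: because the StorageClass here does not set `allowVolumeExpansion`, even a pure `resources.requests` bump is rejected, which is exactly what the test retries and expects. A loose sketch of that rule (hypothetical `validate_pvc_update` helper; the real check lives in the Kubernetes apiserver, not in the e2e suite):

```python
import copy

def validate_pvc_update(old_spec: dict, new_spec: dict, allow_expansion: bool) -> list:
    """Loosely mimic the apiserver rule: a bound PVC's spec is immutable,
    except that resources.requests may change when expansion is permitted."""
    old_cmp, new_cmp = copy.deepcopy(old_spec), copy.deepcopy(new_spec)
    if allow_expansion:
        # Normalize away the one mutable field before comparing.
        old_cmp.get("resources", {}).pop("requests", None)
        new_cmp.get("resources", {}).pop("requests", None)
    if old_cmp != new_cmp:
        return ["spec: Forbidden: is immutable after creation except "
                "resources.requests for bound claims"]
    return []

old = {"resources": {"requests": {"storage": "1Mi"}}, "storageClassName": "sc1"}
grown = {"resources": {"requests": {"storage": "1Gi"}}, "storageClassName": "sc1"}

# Without allowVolumeExpansion, the size bump is Forbidden (as in the log above)...
assert validate_pvc_update(old, grown, allow_expansion=False) != []
# ...but with expansion permitted, the same update would pass validation.
assert validate_pvc_update(old, grown, allow_expansion=True) == []
```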

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:49.386: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 94 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:50.423: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 36 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:52.156: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
... skipping 13 lines ...
Jan 16 09:06:50.933: INFO: stderr: ""
Jan 16 09:06:50.933: INFO: stdout: "scheduler controller-manager etcd-1 etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Jan 16 09:06:50.933: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get componentstatuses scheduler'
Jan 16 09:06:51.173: INFO: stderr: ""
Jan 16 09:06:51.173: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Jan 16 09:06:51.173: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get componentstatuses controller-manager'
Jan 16 09:06:51.510: INFO: stderr: ""
Jan 16 09:06:51.511: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Jan 16 09:06:51.511: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-1'
Jan 16 09:06:51.860: INFO: stderr: ""
Jan 16 09:06:51.860: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-0
Jan 16 09:06:51.860: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-0'
Jan 16 09:06:52.463: INFO: stderr: ""
Jan 16 09:06:52.463: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:52.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9296" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":2,"skipped":20,"failed":0}
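The componentstatuses test above shells out to `kubectl get componentstatuses <name>` and inspects the table in stdout. A rough illustration of parsing that output (hypothetical `parse_componentstatus` helper, assuming the fixed-column layout shown in the log; the real test uses the e2e framework's kubectl wrappers):

```python
def parse_componentstatus(stdout: str) -> dict:
    """Parse 'kubectl get componentstatuses' table output into {name: status},
    assuming the NAME/STATUS/MESSAGE/ERROR header shown in the log above."""
    rows = {}
    lines = stdout.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        rows[fields[0]] = fields[1]  # NAME -> STATUS
    return rows

out = "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
assert parse_componentstatus(out) == {"scheduler": "Healthy"}
```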
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:52.898: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:06:52.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 100 lines ...
• [SLOW TEST:37.454 seconds]
[sig-storage] PVC Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:106
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:06:57.725: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 164 lines ...
• [SLOW TEST:8.921 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:01.110: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 72 lines ...
STEP: Deleting the previously created pod
Jan 16 09:06:40.273: INFO: Deleting pod "pvc-volume-tester-v69rd" in namespace "csi-mock-volumes-665"
Jan 16 09:06:40.688: INFO: Wait up to 5m0s for pod "pvc-volume-tester-v69rd" to be fully deleted
STEP: Checking CSI driver logs
Jan 16 09:06:51.204: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-665","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-665","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-665","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-665","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-825da2b6-7926-4692-be7a-a681729a59d7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-825da2b6-7926-4692-be7a-a681729a59d7"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-665","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-825da2b6-7926-4692-be7a-a681729a59d7","storage.kubernetes.io/csiProvisionerIdentity":"1579165588052-8081-csi-mock-csi-mock-volumes-665"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-825da2b6-7926-4692-be7a-a681729a59d7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-825da2b6-7926-4692-be7a-a681729a59d7","storage.kubernetes.io/csiProvisionerIdentity":"1579165588052-8081-csi-mock-csi-mock-volumes-665"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-825da2b6-7926-4692-be7a-a681729a59d7/globalmount","target_path":"/var/lib/kubelet/pods/7ecf6431-5f27-4bd6-81bd-98fcdf3f7cbe/volumes/kubernetes.io~csi/pvc-825da2b6-7926-4692-be7a-a681729a59d7/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-825da2b6-7926-4692-be7a-a681729a59d7","storage.kubernetes.io/csiProvisionerIdentity":"1579165588052-8081-csi-mock-csi-mock-volumes-665"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7ecf6431-5f27-4bd6-81bd-98fcdf3f7cbe/volumes/kubernetes.io~csi/pvc-825da2b6-7926-4692-be7a-a681729a59d7/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-825da2b6-7926-4692-be7a-a681729a59d7/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-665"},"Response":{},"Error":""}

Jan 16 09:06:51.205: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-v69rd
Jan 16 09:06:51.205: INFO: Deleting pod "pvc-volume-tester-v69rd" in namespace "csi-mock-volumes-665"
STEP: Deleting claim pvc-58bv6
Jan 16 09:06:51.539: INFO: Waiting up to 2m0s for PersistentVolume pvc-825da2b6-7926-4692-be7a-a681729a59d7 to get deleted
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    should not be passed when podInfoOnMount=false
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":2,"skipped":19,"failed":0}
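The `gRPCCall:` lines in the CSI driver logs above are one JSON object per call; the test scans them to verify, for instance, that `NodePublishVolume` carried no pod info when `podInfoOnMount=false`. A sketch of that kind of log scan (hypothetical `csi_methods` helper, assuming only the `gRPCCall: {...}` format shown above):

```python
import json

def csi_methods(log: str) -> list:
    """Extract the CSI method name from each 'gRPCCall:' line of mock-driver output."""
    prefix = "gRPCCall: "
    calls = []
    for line in log.splitlines():
        if line.startswith(prefix):
            calls.append(json.loads(line[len(prefix):])["Method"])
    return calls

sample = (
    'gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},'
    '"Response":{"ready":{"value":true}},"Error":""}\n'
    'gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4"},'
    '"Response":{},"Error":""}\n'
)
assert csi_methods(sample) == ["/csi.v1.Identity/Probe", "/csi.v1.Node/NodePublishVolume"]
```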

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:02.646: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:15.245: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-scheduling] PreemptionExecutionPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:03.011: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7054
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-1eb072c7-d517-4583-9a0e-7e4609f7ca3e
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:05.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7054" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:05.956: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
• [SLOW TEST:18.457 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:11.384: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 73 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:329
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 64 lines ...
Jan 16 09:06:26.176: INFO: PersistentVolumeClaim csi-hostpaththrgj found but phase is Pending instead of Bound.
Jan 16 09:06:28.439: INFO: PersistentVolumeClaim csi-hostpaththrgj found but phase is Pending instead of Bound.
Jan 16 09:06:30.517: INFO: PersistentVolumeClaim csi-hostpaththrgj found but phase is Pending instead of Bound.
Jan 16 09:06:32.594: INFO: PersistentVolumeClaim csi-hostpaththrgj found and phase=Bound (27.641812332s)
STEP: Expanding non-expandable pvc
Jan 16 09:06:32.742: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 16 09:06:33.132: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:35.506: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:37.686: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:39.741: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:41.359: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:43.421: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:45.439: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:47.410: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:49.531: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:51.252: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:53.719: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:55.714: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:57.414: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:06:59.351: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:07:01.325: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:07:03.737: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:07:04.263: INFO: Error updating pvc csi-hostpaththrgj: persistentvolumeclaims "csi-hostpaththrgj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 16 09:07:04.263: INFO: Deleting PersistentVolumeClaim "csi-hostpaththrgj"
Jan 16 09:07:04.599: INFO: Waiting up to 5m0s for PersistentVolume pvc-bf1a21c3-edbf-4864-aa7c-ec0cef88011a to get deleted
Jan 16 09:07:04.770: INFO: PersistentVolume pvc-bf1a21c3-edbf-4864-aa7c-ec0cef88011a found and phase=Bound (171.194522ms)
Jan 16 09:07:09.834: INFO: PersistentVolume pvc-bf1a21c3-edbf-4864-aa7c-ec0cef88011a was removed
STEP: Deleting sc
... skipping 47 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:23.917: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:23.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 157 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:24.942: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:24.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 100 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1054
    should create/apply a valid CR for CRD with validation schema
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1073
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":4,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:29.914: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 285 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:34.105: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:10.393 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:76
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:24.264: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7255
... skipping 53 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:40.650: INFO: Only supported for providers [vsphere] (not gce)
... skipping 179 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:30.437: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2441
... skipping 167 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:43.287: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:44.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9751" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":3,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:44.929: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:44.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 90 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with mount options
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}
[BeforeEach] [sig-storage] Volume Placement
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:44.346: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename volume-placement
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-placement-5431
... skipping 40 lines ...
      Distro gci doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:159
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:06.170: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":5,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:48.932: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 78 lines ...
• [SLOW TEST:58.762 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/headless
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2494
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:49.250: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 123 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support sysctls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 81 lines ...
• [SLOW TEST:68.904 seconds]
[sig-network] Networking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should check kube-proxy urls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:148
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:7.263 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:56.608: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
• [SLOW TEST:5.283 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should prevent NodePort collisions
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1752
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":12,"failed":0}

SSS
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:52.158: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9842
... skipping 21 lines ...
• [SLOW TEST:6.677 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:58.836: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:07:58.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":4,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:07:58.879: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
... skipping 192 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:34.342: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:01.611: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:01.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 204 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:05.389: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4161
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:06.408: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:06.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 52 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":4,"skipped":42,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:8.826 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:09.013: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 63 lines ...
• [SLOW TEST:10.107 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 202 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":59,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:11.908: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 97 lines ...
• [SLOW TEST:5.690 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PV that is not bound to a PVC
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:98
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":3,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:12.107: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 110 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:13.202: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:13.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
• [SLOW TEST:13.168 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":5,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:14.617: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:14.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
• [SLOW TEST:8.006 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:14.704: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4742
... skipping 18 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 194 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:34.002: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:34.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 87 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:35.118: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:35.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 49 lines ...
• [SLOW TEST:8.646 seconds]
[sig-auth] PodSecurityPolicy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should enforce the restricted policy.PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy","total":-1,"completed":7,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:36.031: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 161 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":4,"skipped":15,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:38.132: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392

      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:02.670: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:44.857: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:44.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 61 lines ...
• [SLOW TEST:33.905 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:46.026: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:46.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 35 lines ...
Jan 16 09:08:25.328: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:44.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9537" for this suite.
STEP: Destroying namespace "webhook-9537-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:33.558 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:46.778: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:46.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 108 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:47.248: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 34 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:48.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8865" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:48.512: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:48.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:05.793: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-5440
... skipping 9 lines ...
Jan 16 09:08:11.951: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8z68j] to have phase Bound
Jan 16 09:08:12.180: INFO: PersistentVolumeClaim pvc-8z68j found but phase is Pending instead of Bound.
Jan 16 09:08:14.413: INFO: PersistentVolumeClaim pvc-8z68j found and phase=Bound (2.462513393s)
Jan 16 09:08:14.413: INFO: Waiting up to 3m0s for PersistentVolume gce-nvbq2 to have phase Bound
Jan 16 09:08:14.666: INFO: PersistentVolume gce-nvbq2 found and phase=Bound (253.177423ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Jan 16 09:08:30.362: INFO: Deleting PersistentVolume "gce-nvbq2"
STEP: Deleting the client pod
Jan 16 09:08:31.025: INFO: Deleting pod "pvc-tester-5dm9c" in namespace "pv-5440"
Jan 16 09:08:31.394: INFO: Wait up to 5m0s for pod "pvc-tester-5dm9c" to be fully deleted
... skipping 14 lines ...
Jan 16 09:08:49.572: INFO: Successfully deleted PD "bootstrap-e2e-103d5bb8-41f1-4328-aed6-23a8a85d10f1".


• [SLOW TEST:43.779 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":7,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:49.574: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:49.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 19 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:38.162: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4106
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:08:54.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4106" for this suite.


• [SLOW TEST:17.238 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:27.360: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:56.161: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 167 lines ...
• [SLOW TEST:9.602 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:08:58.127: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 108 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:01.562: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:01.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 21 lines ...
Jan 16 09:08:11.766: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1466
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 16 09:08:13.203: INFO: PodSpec: initContainers in spec.initContainers
Jan 16 09:09:01.028: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e3239cf-2b47-4292-9a12-1800e8fe1f86", GenerateName:"", Namespace:"init-container-1466", SelfLink:"/api/v1/namespaces/init-container-1466/pods/pod-init-9e3239cf-2b47-4292-9a12-1800e8fe1f86", UID:"62548a1a-2c72-4414-a8e6-984fc6313e2e", ResourceVersion:"6348", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714762493, loc:(*time.Location)(0x7bb7ec0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"203287765"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fvhv2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a10dc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fvhv2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fvhv2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fvhv2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0032d3ee0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-zb1j", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023286c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032d3f60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032d3f80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0032d3f88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0032d3f8c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762493, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762493, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762493, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762493, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.5", PodIP:"10.64.1.50", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.50"}}, StartTime:(*v1.Time)(0xc00340aec0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a552d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a55340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://31cae6edd53234d01de03e2c52b151b81d125ae629eda7b6591a4939d8bb6ef8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00340af00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00340aee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00329200f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:01.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1466" for this suite.


• [SLOW TEST:50.194 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:01.966: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:50.981 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":6,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
Jan 16 09:09:02.907: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:03.548: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:03.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should not be able to pull image from invalid registry [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:369
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:35.612: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":5,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:06.305: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:06.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:09.870: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 105 lines ...
STEP: Wait for the deployment to be ready
Jan 16 09:09:04.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 16 09:09:07.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762544, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762544, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762544, loc:(*time.Location)(0x7bb7ec0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714762543, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 16 09:09:11.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:12.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6260" for this suite.
... skipping 2 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101


• [SLOW TEST:17.564 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 42 lines ...
• [SLOW TEST:31.378 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:17.428: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:37.683: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:20.004: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 45 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Jan 16 09:08:55.617: INFO: Waiting for PV local-pvsmz62 to bind to PVC pvc-thdst
Jan 16 09:08:55.617: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-thdst] to have phase Bound
Jan 16 09:08:55.995: INFO: PersistentVolumeClaim pvc-thdst found but phase is Pending instead of Bound.
Jan 16 09:08:58.183: INFO: PersistentVolumeClaim pvc-thdst found and phase=Bound (2.566547966s)
Jan 16 09:08:58.183: INFO: Waiting up to 3m0s for PersistentVolume local-pvsmz62 to have phase Bound
Jan 16 09:08:58.416: INFO: PersistentVolume local-pvsmz62 found and phase=Bound (232.455747ms)
[It] should fail scheduling due to different NodeSelector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 09:08:58.815: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1fbddb90-d3a4-4fa0-b654-a73440b580c7] Namespace:persistent-local-volumes-test-1435 PodName:hostexec-bootstrap-e2e-minion-group-451g-lprqk ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 09:08:58.815: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:77.008 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeSelector
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:26.041: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:26.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 78 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    should reuse port when apply to an existing SVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:937
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":8,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:26.222: INFO: Only supported for providers [openstack] (not gce)
... skipping 105 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:31.387: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:28.932: INFO: Only supported for providers [aws] (not gce)
... skipping 262 lines ...
Jan 16 09:08:26.956: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.195469242s
Jan 16 09:08:29.258: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.497183073s
Jan 16 09:08:31.504: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.742747668s
Jan 16 09:08:33.693: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.932366593s
Jan 16 09:08:35.756: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.995554511s
Jan 16 09:08:37.819: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.057929169s
Jan 16 09:08:39.982: INFO: Pod "pod-subpath-test-dynamicpv-zzht": Phase="Failed", Reason="", readiness=false. Elapsed: 1m49.221393122s
Jan 16 09:08:41.416: INFO: Output of node "bootstrap-e2e-minion-group-451g" pod "pod-subpath-test-dynamicpv-zzht" container "init-volume-dynamicpv-zzht": 
Jan 16 09:08:43.158: INFO: Output of node "bootstrap-e2e-minion-group-451g" pod "pod-subpath-test-dynamicpv-zzht" container "test-init-subpath-dynamicpv-zzht": content of file "/test-volume/test-file": mount-tester new file

mode of file "/test-volume/test-file": -rw-r--r--

Jan 16 09:08:44.227: INFO: Output of node "bootstrap-e2e-minion-group-451g" pod "pod-subpath-test-dynamicpv-zzht" container "test-container-subpath-dynamicpv-zzht": 
Jan 16 09:08:44.586: INFO: Output of node "bootstrap-e2e-minion-group-451g" pod "pod-subpath-test-dynamicpv-zzht" container "test-container-volume-dynamicpv-zzht": content of file "/test-volume/provisioning-4594/test-file": mount-tester new file


STEP: delete the pod
Jan 16 09:08:44.976: INFO: Waiting for pod pod-subpath-test-dynamicpv-zzht to disappear
Jan 16 09:08:45.170: INFO: Pod pod-subpath-test-dynamicpv-zzht no longer exists
Jan 16 09:08:45.170: FAIL: Unexpected error:
    <*errors.errorString | 0xc002de39f0>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-zzht\" success: pod \"pod-subpath-test-dynamicpv-zzht\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:08:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.6 PodIP:10.64.2.32 PodIPs:[{IP:10.64.2.32}] StartTime:2020-01-16 09:06:54 +0000 UTC InitContainerStatuses:[{Name:init-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:12 +0000 UTC,FinishedAt:2020-01-16 09:07:11 +0000 UTC,ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2 Started:<nil>} {Name:test-init-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:37 +0000 UTC,FinishedAt:2020-01-16 09:07:36 +0000 UTC,ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:08:09 +0000 UTC,FinishedAt:2020-01-16 09:08:09 +0000 UTC,ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730 Started:0xc00236fcf9} {Name:test-container-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-16 09:08:10 +0000 UTC,FinishedAt:2020-01-16 09:08:10 +0000 UTC,ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01 Started:0xc00236fcfa}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    expected pod "pod-subpath-test-dynamicpv-zzht" success: pod "pod-subpath-test-dynamicpv-zzht" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:08:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.6 PodIP:10.64.2.32 PodIPs:[{IP:10.64.2.32}] StartTime:2020-01-16 09:06:54 +0000 UTC InitContainerStatuses:[{Name:init-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:12 +0000 UTC,FinishedAt:2020-01-16 09:07:11 +0000 UTC,ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2 Started:<nil>} {Name:test-init-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:37 +0000 UTC,FinishedAt:2020-01-16 09:07:36 +0000 UTC,ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:08:09 +0000 UTC,FinishedAt:2020-01-16 09:08:09 +0000 UTC,ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730 Started:0xc00236fcf9} {Name:test-container-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-16 09:08:10 +0000 UTC,FinishedAt:2020-01-16 09:08:10 +0000 UTC,ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01 Started:0xc00236fcfa}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00025f7c0, 0x498f4bc, 0x7, 0xc002d98800, 0x1, 0xc0023d5208, 0x1, 0x1, 0x4b720e8)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:829 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 52 lines ...
Jan 16 09:09:12.046: INFO: deleting *v1.StatefulSet: provisioning-4594/csi-snapshotter
Jan 16 09:09:12.423: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-4594
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "provisioning-4594".
STEP: Found 57 events.
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:05 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } FailedCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:05 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:06 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } FailedCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:06 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } FailedCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:06 +0000 UTC - event for csi-snapshotter: {statefulset-controller } SuccessfulCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:07 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:07 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:07 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:07 +0000 UTC - event for csi-hostpathxq5zc: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-4594" or manually created by system administrator
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:06:11 +0000 UTC - event for csi-hostpathplugin-0: {kubelet bootstrap-e2e-minion-group-451g} Pulling: Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
... skipping 35 lines ...
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:07:42 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Started: Started container test-init-subpath-dynamicpv-zzht
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:08:01 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:08:02 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Created: Created container test-container-subpath-dynamicpv-zzht
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:08:09 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Started: Started container test-container-subpath-dynamicpv-zzht
Jan 16 09:09:12.990: INFO: At 2020-01-16 09:08:09 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:08:10 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Created: Created container test-container-volume-dynamicpv-zzht
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:08:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet bootstrap-e2e-minion-group-451g} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:08:20 +0000 UTC - event for pod-subpath-test-dynamicpv-zzht: {kubelet bootstrap-e2e-minion-group-451g} Failed: Error: failed to start container "test-container-volume-dynamicpv-zzht": Error response from daemon: OCI runtime start failed: container process is already dead: unknown
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:09:08 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet bootstrap-e2e-minion-group-451g} Killing: Stopping container csi-attacher
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:09:11 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet bootstrap-e2e-minion-group-451g} Killing: Stopping container csi-provisioner
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:09:11 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet bootstrap-e2e-minion-group-451g} Killing: Stopping container csi-resizer
Jan 16 09:09:12.991: INFO: At 2020-01-16 09:09:12 +0000 UTC - event for csi-snapshotter-0: {kubelet bootstrap-e2e-minion-group-451g} Killing: Stopping container csi-snapshotter
Jan 16 09:09:13.178: INFO: POD                         NODE                             PHASE    GRACE  CONDITIONS
Jan 16 09:09:13.178: INFO: csi-hostpath-attacher-0     bootstrap-e2e-minion-group-451g  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  }]
Jan 16 09:09:13.178: INFO: csi-hostpath-provisioner-0  bootstrap-e2e-minion-group-451g  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  }]
Jan 16 09:09:13.178: INFO: csi-hostpath-resizer-0      bootstrap-e2e-minion-group-451g  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  }]
Jan 16 09:09:13.178: INFO: csi-hostpathplugin-0        bootstrap-e2e-minion-group-451g  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:07:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:07:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:05 +0000 UTC  }]
Jan 16 09:09:13.178: INFO: csi-snapshotter-0           bootstrap-e2e-minion-group-451g  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-16 09:06:07 +0000 UTC  }]
Jan 16 09:09:13.178: INFO: 
Jan 16 09:09:13.316: INFO: 
Logging node info for node bootstrap-e2e-master
Jan 16 09:09:13.464: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master   /api/v1/nodes/bootstrap-e2e-master 6192826f-f447-40e2-99eb-17cd9d1ac0da 6137 0 2020-01-16 09:02:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kubernetes-jkns-e2e-gce-serial/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3876802560 0} {<nil>} 3785940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3614658560 0} {<nil>} 3529940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 09:02:57 +0000 UTC,LastTransitionTime:2020-01-16 09:02:57 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-16 09:08:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.211.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.kubernetes-jkns-e2e-gce-serial.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.kubernetes-jkns-e2e-gce-serial.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b7ab40631c773dc7df1585cc30c29abc,SystemUUID:b7ab4063-1c77-3dc7-df15-85cc30c29abc,BootID:fee620de-c3cb-42e2-ba2c-70cb35771e85,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.810+f437ff75d45517,KubeProxyVersion:v1.18.0-alpha.1.810+f437ff75d45517,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 
k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:214130864,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:204358801,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:114414862,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:484662e55e0705caed26c6fb8632097457f43ce685756531da7a76319a7dcee1 k8s.gcr.io/etcd-empty-dir-cleanup:3.4.3.0],SizeBytes:77408900,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:76121176,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 16 09:09:13.464: INFO: 
Logging kubelet events for node bootstrap-e2e-master
Jan 16 09:09:13.873: INFO: 
Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 16 09:09:14.394: INFO: kube-apiserver-bootstrap-e2e-master started at 2020-01-16 09:01:38 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:14.394: INFO: 	Container kube-apiserver ready: true, restart count 0
... skipping 18 lines ...
Jan 16 09:09:14.394: INFO: kube-scheduler-bootstrap-e2e-master started at 2020-01-16 09:01:39 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:14.394: INFO: 	Container kube-scheduler ready: true, restart count 0
Jan 16 09:09:15.725: INFO: 
Latency metrics for node bootstrap-e2e-master
Jan 16 09:09:15.725: INFO: 
Logging node info for node bootstrap-e2e-minion-group-451g
Jan 16 09:09:15.931: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-451g   /api/v1/nodes/bootstrap-e2e-minion-group-451g 2e683fcb-ea7f-4fe5-a312-53fc7a4e952c 6085 0 2020-01-16 09:02:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-451g kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-451g topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2075":"bootstrap-e2e-minion-group-451g","csi-hostpath-ephemeral-8495":"bootstrap-e2e-minion-group-451g","csi-hostpath-provisioning-4594":"bootstrap-e2e-minion-group-451g","csi-hostpath-provisioning-5242":"bootstrap-e2e-minion-group-451g","csi-mock-csi-mock-volumes-8146":"csi-mock-csi-mock-volumes-8146"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kubernetes-jkns-e2e-gce-serial/us-west1-b/bootstrap-e2e-minion-group-451g,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840235520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7578091520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 09:02:57 +0000 UTC,LastTransitionTime:2020-01-16 09:02:57 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:50 +0000 
UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:50 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-16 09:08:50 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-16 09:08:50 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.6,},NodeAddress{Type:ExternalIP,Address:35.247.44.158,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-451g.c.kubernetes-jkns-e2e-gce-serial.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-451g.c.kubernetes-jkns-e2e-gce-serial.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48434d1fb8165a077136780728da53b3,SystemUUID:48434d1f-b816-5a07-7136-780728da53b3,BootID:8982efe0-413b-4d20-bd9b-c178f54413cc,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.810+f437ff75d45517,KubeProxyVersion:v1.18.0-alpha.1.810+f437ff75d45517,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf 
gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:ab71028f7cbc851d273bb00449e30ab743d4e3be21ed2093299f718b42df0748 k8s.gcr.io/event-exporter:v0.3.1],SizeBytes:51445475,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:1d49fb3b108e6b42542e4a9b056dee308f06f88824326cde1636eea0472b799d k8s.gcr.io/prometheus-to-sd:v0.7.2],SizeBytes:42314030,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e 
k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-5242^cd6141c0-383f-11ea-802b-a2dc5560671b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-5242^cd6141c0-383f-11ea-802b-a2dc5560671b,DevicePath:,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 16 09:09:15.931: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-451g
Jan 16 09:09:16.313: INFO: 
Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-451g
Jan 16 09:09:16.923: INFO: inline-volume-tester-brlqm started at 2020-01-16 09:06:25 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:16.923: INFO: 	Container csi-volume-tester ready: true, restart count 0
... skipping 105 lines ...
Jan 16 09:09:16.924: INFO: hostexec-bootstrap-e2e-minion-group-451g-zfglg started at 2020-01-16 09:06:35 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:16.924: INFO: 	Container agnhost ready: true, restart count 0
Jan 16 09:09:21.904: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-451g
Jan 16 09:09:21.904: INFO: 
Logging node info for node bootstrap-e2e-minion-group-7fqk
Jan 16 09:09:22.186: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7fqk   /api/v1/nodes/bootstrap-e2e-minion-group-7fqk 9b9440ab-8794-4e90-aca4-58a88c71d48e 4790 0 2020-01-16 09:02:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7fqk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kubernetes-jkns-e2e-gce-serial/us-west1-b/bootstrap-e2e-minion-group-7fqk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840235520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578091520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 09:03:06 +0000 UTC,LastTransitionTime:2020-01-16 09:03:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-16 09:07:25 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-16 09:07:25 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-16 09:07:25 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-16 09:07:25 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.38.199,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7fqk.c.kubernetes-jkns-e2e-gce-serial.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7fqk.c.kubernetes-jkns-e2e-gce-serial.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6f6e156cda0b79790afff2f5ae62d493,SystemUUID:6f6e156c-da0b-7979-0aff-f2f5ae62d493,BootID:647c4660-741a-4390-82ea-75c317d0e4c3,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.810+f437ff75d45517,KubeProxyVersion:v1.18.0-alpha.1.810+f437ff75d45517,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf 
gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7],SizeBytes:37930718,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/apparmor-loader@sha256:1fdc224b826c4bc16b3cdf5c09d6e5b8c7aa77e2b2d81472a1316bd1606fa1bd gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0],SizeBytes:13090050,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 16 09:09:22.186: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-7fqk
Jan 16 09:09:22.392: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-7fqk
Jan 16 09:09:22.660: INFO: webserver-deployment-595b5b9587-jdwh9 started at 2020-01-16 09:09:09 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:22.660: INFO: 	Container httpd ready: true, restart count 0
... skipping 29 lines ...
Jan 16 09:09:22.660: INFO: webserver-deployment-595b5b9587-56n28 started at 2020-01-16 09:09:10 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:22.660: INFO: 	Container httpd ready: true, restart count 0
Jan 16 09:09:24.846: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-7fqk
Jan 16 09:09:24.846: INFO: 
Logging node info for node bootstrap-e2e-minion-group-8mzr
Jan 16 09:09:25.229: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8mzr   /api/v1/nodes/bootstrap-e2e-minion-group-8mzr bd28513f-804f-4c1b-ae0d-d6cd4619149d 6890 0 2020-01-16 09:02:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8mzr kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kubernetes-jkns-e2e-gce-serial/us-west1-b/bootstrap-e2e-minion-group-8mzr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840235520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578091520 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-16 09:07:51 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 09:02:57 +0000 UTC,LastTransitionTime:2020-01-16 09:02:57 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:18 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:18 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:18 +0000 UTC,LastTransitionTime:2020-01-16 09:02:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-16 09:09:18 +0000 UTC,LastTransitionTime:2020-01-16 09:03:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.199.159.96,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8mzr.c.kubernetes-jkns-e2e-gce-serial.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8mzr.c.kubernetes-jkns-e2e-gce-serial.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4a8922c71b785a585d0e6c1f4cf54d27,SystemUUID:4a8922c7-1b78-5a58-5d0e-6c1f4cf54d27,BootID:1c4181fe-8f06-4af7-8cf2-c040a9f97fc3,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.810+f437ff75d45517,KubeProxyVersion:v1.18.0-alpha.1.810+f437ff75d45517,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 
k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d 
quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/apparmor-loader@sha256:1fdc224b826c4bc16b3cdf5c09d6e5b8c7aa77e2b2d81472a1316bd1606fa1bd gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0],SizeBytes:13090050,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 16 09:09:25.234: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-8mzr
Jan 16 09:09:25.542: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-8mzr
Jan 16 09:09:25.974: INFO: hostexec-bootstrap-e2e-minion-group-8mzr-z4g45 started at 2020-01-16 09:09:17 +0000 UTC (0+1 container statuses recorded)
Jan 16 09:09:25.974: INFO: 	Container agnhost ready: true, restart count 0
... skipping 29 lines ...
Jan 16 09:09:25.975: INFO: 	Container metadata-proxy ready: true, restart count 0
Jan 16 09:09:25.975: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 16 09:09:27.041: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-8mzr
Jan 16 09:09:27.041: INFO: 
Logging node info for node bootstrap-e2e-minion-group-zb1j
Jan 16 09:09:27.532: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-zb1j   /api/v1/nodes/bootstrap-e2e-minion-group-zb1j e08c6ac4-07f2-4196-b819-18a2bf48ef2a 6505 0 2020-01-16 09:02:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-zb1j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-zb1j topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-8617":"bootstrap-e2e-minion-group-zb1j"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kubernetes-jkns-e2e-gce-serial/us-west1-b/bootstrap-e2e-minion-group-zb1j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7840251904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7578107904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 
+0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-16 09:07:52 +0000 UTC,LastTransitionTime:2020-01-16 09:02:51 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-16 09:02:57 +0000 UTC,LastTransitionTime:2020-01-16 09:02:57 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:07 +0000 UTC,LastTransitionTime:2020-01-16 09:02:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:07 +0000 UTC,LastTransitionTime:2020-01-16 09:02:48 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-16 09:09:07 +0000 UTC,LastTransitionTime:2020-01-16 09:02:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-16 09:09:07 +0000 UTC,LastTransitionTime:2020-01-16 09:02:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.95.20,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-zb1j.c.kubernetes-jkns-e2e-gce-serial.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-zb1j.c.kubernetes-jkns-e2e-gce-serial.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:43bc35595c8f9cc66a250d3a9bb2ce55,SystemUUID:43bc3559-5c8f-9cc6-6a25-0d3a9bb2ce55,BootID:05e43e6b-e27d-40be-931f-7fc59b251f90,KernelVersion:4.19.76+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://19.3.1,KubeletVersion:v1.18.0-alpha.1.810+f437ff75d45517,KubeProxyVersion:v1.18.0-alpha.1.810+f437ff75d45517,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517],SizeBytes:134166654,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:e10aab64506dd46c37e89ba9c962e34b0b1c91e498721b622f71af92199a06bc quay.io/k8scsi/csi-provisioner:v1.5.0],SizeBytes:47942457,},ContainerImage{Names:[quay.io/k8scsi/snapshot-controller@sha256:b3c1c484ffe4f0bbf000bda93fb745e1b3899b08d605a08581426a1963dd3e8a quay.io/k8scsi/snapshot-controller:v2.0.0-rc2],SizeBytes:47222712,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:8f4a5b1a78441cfc8a674555b772e39b91932985618fc4054d5e73d27bc45a72 quay.io/k8scsi/csi-snapshotter:v2.0.0],SizeBytes:46317165,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:e2dd4c337f1beccd298acbd7179565579c4645125057dfdcbbb5beef977b779e quay.io/k8scsi/csi-attacher:v2.1.0],SizeBytes:46131029,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:6883482c4c4bd0eabb83315b5a9e8d9c0f34357980489a679a56441ec74c24c9 quay.io/k8scsi/csi-resizer:v0.4.0],SizeBytes:46065348,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7],SizeBytes:37930718,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:2f80c87a67122448bd60c778dd26d4067e1f48d1ea3fefe9f92b8cd1961acfa0 
quay.io/k8scsi/hostpathplugin:v1.3.0-rc1],SizeBytes:28736864,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:89cdb2a20bdec89b75e2fbd82a67567ea90b719524990e772f2704b19757188d quay.io/k8scsi/csi-node-driver-registrar:v1.2.0],SizeBytes:17057647,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Jan 16 09:09:27.533: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-zb1j
Jan 16 09:09:28.461: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-zb1j
Jan 16 09:09:28.768: INFO: pod-init-9e3239cf-2b47-4292-9a12-1800e8fe1f86 started at 2020-01-16 09:08:13 +0000 UTC (2+1 container statuses recorded)
Jan 16 09:09:28.768: INFO: 	Init container init1 ready: false, restart count 3
... skipping 55 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory [It]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203

      Jan 16 09:08:45.170: Unexpected error:
          <*errors.errorString | 0xc002de39f0>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-zzht\" success: pod \"pod-subpath-test-dynamicpv-zzht\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:08:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.6 PodIP:10.64.2.32 PodIPs:[{IP:10.64.2.32}] StartTime:2020-01-16 09:06:54 +0000 UTC InitContainerStatuses:[{Name:init-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:12 +0000 UTC,FinishedAt:2020-01-16 09:07:11 +0000 UTC,ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2 Started:<nil>} {Name:test-init-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:37 +0000 UTC,FinishedAt:2020-01-16 09:07:36 +0000 
UTC,ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:08:09 +0000 UTC,FinishedAt:2020-01-16 09:08:09 +0000 UTC,ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730 Started:0xc00236fcf9} {Name:test-container-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-16 09:08:10 +0000 UTC,FinishedAt:2020-01-16 09:08:10 +0000 UTC,ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01 
Started:0xc00236fcfa}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
          }
          expected pod "pod-subpath-test-dynamicpv-zzht" success: pod "pod-subpath-test-dynamicpv-zzht" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:08:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-subpath-dynamicpv-zzht test-container-volume-dynamicpv-zzht]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-16 09:06:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.6 PodIP:10.64.2.32 PodIPs:[{IP:10.64.2.32}] StartTime:2020-01-16 09:06:54 +0000 UTC InitContainerStatuses:[{Name:init-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:12 +0000 UTC,FinishedAt:2020-01-16 09:07:11 +0000 UTC,ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:busybox:1.29 ImageID:docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 ContainerID:docker://ec0f8ca162ebeae86257273252544be11d4cd794f79b954a35811c8f6a3218c2 Started:<nil>} {Name:test-init-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:07:37 +0000 UTC,FinishedAt:2020-01-16 09:07:36 +0000 
UTC,ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://08ab8a648feb8ceafbed13a221e70059eb1d8544a8373133b30862aa8b96a9fd Started:<nil>}] ContainerStatuses:[{Name:test-container-subpath-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2020-01-16 09:08:09 +0000 UTC,FinishedAt:2020-01-16 09:08:09 +0000 UTC,ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://8225b1cf1a6eac21951b05dd84e99eccadd0c7ee28df90ce37978e54211d6730 Started:0xc00236fcf9} {Name:test-container-volume-dynamicpv-zzht State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-16 09:08:10 +0000 UTC,FinishedAt:2020-01-16 09:08:10 +0000 UTC,ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://d9d1abab8802709fbc485a6193464b58fab98f02fc5b4355aad42bfc9749ed01 
Started:0xc00236fcfa}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
      occurred

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:829
------------------------------
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":0,"skipped":5,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:31.382: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:31.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 64 lines ...
• [SLOW TEST:12.155 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:32.179: INFO: Only supported for providers [openstack] (not gce)
... skipping 95 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-txsb9 webserver-deployment-595b5b9587- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-595b5b9587-txsb9 8686ffa5-5a03-4264-b1d0-111f5b6accf2 7219 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e5707caa-a3f9-47e5-8835-52fffc4c0f0d 0xc0022dd8f0 0xc0022dd8f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.154: INFO: Pod "webserver-deployment-595b5b9587-w9jft" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9jft webserver-deployment-595b5b9587- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-595b5b9587-w9jft 10a315fd-66bc-45c3-805b-a09aeed18d03 6728 0 2020-01-16 09:09:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e5707caa-a3f9-47e5-8835-52fffc4c0f0d 0xc0022dda00 0xc0022dda01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.3.51,StartTime:2020-01-16 09:09:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 09:09:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://511b7daebc7f887a9f858c734a64dedfb3b88ff6803f05959fec4db1878de46d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.154: INFO: Pod "webserver-deployment-595b5b9587-zpgjt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zpgjt webserver-deployment-595b5b9587- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-595b5b9587-zpgjt d7e6e9ec-5d2f-4aa7-9956-4de147fb428f 7258 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e5707caa-a3f9-47e5-8835-52fffc4c0f0d 0xc0022ddbb0 0xc0022ddbb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-7fqk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-16 09:09:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.155: INFO: Pod "webserver-deployment-c7997dcc8-6cg42" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6cg42 webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-6cg42 56aa5c14-17c5-455a-b843-8538ed8f40d0 7184 0 2020-01-16 09:09:19 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022ddcf0 0xc0022ddcf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.3.54,StartTime:2020-01-16 09:09:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.155: INFO: Pod "webserver-deployment-c7997dcc8-7hbkz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7hbkz webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-7hbkz b3b2e526-c060-4d13-9c9a-6451c62edf68 7234 0 2020-01-16 09:09:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022ddea0 0xc0022ddea1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.3.55,StartTime:2020-01-16 09:09:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.155: INFO: Pod "webserver-deployment-c7997dcc8-8ph2z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8ph2z webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-8ph2z 0e0b5559-e2a0-4a28-827b-0c7ce5b4c98e 7135 0 2020-01-16 09:09:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022ba0b0 0xc0022ba0b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zb1j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.1.68,StartTime:2020-01-16 09:09:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.155: INFO: Pod "webserver-deployment-c7997dcc8-bsjsp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bsjsp webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-bsjsp 6952bf0d-3f6b-4cae-832c-dccd919bed17 7148 0 2020-01-16 09:09:19 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022ba4f0 0xc0022ba4f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zb1j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.1.67,StartTime:2020-01-16 09:09:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.156: INFO: Pod "webserver-deployment-c7997dcc8-cjdpq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cjdpq webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-cjdpq ed62c5eb-70d6-416e-bc46-713825b12dc3 7303 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022ba7d0 0xc0022ba7d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-7fqk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-01-16 09:09:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.156: INFO: Pod "webserver-deployment-c7997dcc8-d6424" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d6424 webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-d6424 db4a8e67-a107-43e9-b64e-2c9188deab69 7299 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bac20 0xc0022bac21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-01-16 09:09:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.156: INFO: Pod "webserver-deployment-c7997dcc8-dt68h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dt68h webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-dt68h d08f027b-b16f-45e7-bff4-2cc4ed3b5d05 7229 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bada0 0xc0022bada1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.157: INFO: Pod "webserver-deployment-c7997dcc8-md9tg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-md9tg webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-md9tg 1ec8a5a5-6f10-41a3-a9b0-f509b11b393d 7227 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022baf90 0xc0022baf91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.157: INFO: Pod "webserver-deployment-c7997dcc8-mgsgh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mgsgh webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-mgsgh 561cdf9b-ba06-4f58-ad43-ab5e23b1a944 7097 0 2020-01-16 09:09:19 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bb2f0 0xc0022bb2f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-7fqk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.4.57,StartTime:2020-01-16 09:09:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.4.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.157: INFO: Pod "webserver-deployment-c7997dcc8-mw2pj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mw2pj webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-mw2pj 9bcd1149-a23f-4602-81f3-5c9677b2d2f4 7241 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bb5b0 0xc0022bb5b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-8mzr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.157: INFO: Pod "webserver-deployment-c7997dcc8-rd5ch" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rd5ch webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-rd5ch 0f680da1-4f75-4bf0-8dd2-63ac796e0783 7223 0 2020-01-16 09:09:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bb6f0 0xc0022bb6f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zb1j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-01-16 09:09:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 16 09:09:30.162: INFO: Pod "webserver-deployment-c7997dcc8-s94wj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s94wj webserver-deployment-c7997dcc8- deployment-5756 /api/v1/namespaces/deployment-5756/pods/webserver-deployment-c7997dcc8-s94wj 54dd9b3d-00e8-468f-aeee-24dad495dc9c 7318 0 2020-01-16 09:09:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ad789774-ec57-42b8-a899-f32c2a5e5325 0xc0022bb860 0xc0022bb861}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kwkmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kwkmw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kwkmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-zb1j,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 09:09:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-01-16 09:09:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 8 lines ...
• [SLOW TEST:28.903 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:32.467: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:09:32.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 154 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":3,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:38.364: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 36 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1054
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1099
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:46.010: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver vsphere doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:03.418: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9919
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:53.779: INFO: Driver local doesn't support ext4 -- skipping
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 53 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:08:56.610: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:09:59.074: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:01.172: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 239 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:01.618: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:01.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:14.980 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:78.739 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:14.151: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 177 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 740 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with pvc data source
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:214
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":5,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:21.529 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:17.242: INFO: Only supported for providers [azure] (not gce)
... skipping 117 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:21.375: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:21.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:21.962: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:21.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:9.061 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:24.147: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:24.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 128 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 109 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jan 16 09:10:09.435: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jan 16 09:10:12.425: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jan 16 09:10:12.425: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jan 16 09:10:14.904: INFO: rc: 255
Jan 16 09:10:14.904: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0116 09:10:14.599327     178 merged_client_builder.go:164] Using in-cluster namespace
I0116 09:10:14.601546     178 merged_client_builder.go:122] Using in-cluster configuration
I0116 09:10:14.606878     178 merged_client_builder.go:122] Using in-cluster configuration
I0116 09:10:14.618025     178 merged_client_builder.go:122] Using in-cluster configuration
I0116 09:10:14.618402     178 round_trippers.go:420] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-6614/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0116 09:10:14.697162     178 helpers.go:114] error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jan 16 09:10:14.904: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jan 16 09:10:16.598: INFO: rc: 255
Jan 16 09:10:16.598: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0116 09:10:16.416603     190 merged_client_builder.go:164] Using in-cluster namespace
I0116 09:10:16.429942     190 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 12 milliseconds
I0116 09:10:16.430015     190 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.439480     190 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 8 milliseconds
I0116 09:10:16.439852     190 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.440104     190 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.444013     190 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 3 milliseconds
I0116 09:10:16.444180     190 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.450505     190 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 6 milliseconds
I0116 09:10:16.450796     190 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.453901     190 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0116 09:10:16.455017     190 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 09:10:16.455056     190 helpers.go:221] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0116 09:10:16.455192     190 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jan 16 09:10:16.599: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6614 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jan 16 09:10:18.701: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jan 16 09:10:18.701: INFO: stdout: "I0116 09:10:18.336912     202 merged_client_builder.go:122] Using in-cluster configuration\nI0116 09:10:18.344091     202 merged_client_builder.go:122] Using in-cluster configuration\nI0116 09:10:18.356070     202 merged_client_builder.go:122] Using in-cluster configuration\nI0116 09:10:18.566649     202 round_trippers.go:443] GET https://10.0.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 210 milliseconds\nNo resources found in invalid namespace.\n"
Jan 16 09:10:18.701: INFO: stdout: I0116 09:10:18.336912     202 merged_client_builder.go:122] Using in-cluster configuration
... skipping 75 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should handle in-cluster config
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:769
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":9,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 100 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:28.904: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 140 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:32.031: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 226 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:33.652: INFO: Only supported for providers [aws] (not gce)
... skipping 174 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:34.512: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:37.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9229" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:38.067: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:38.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 99 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 16 09:10:32.649: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929"
Jan 16 09:10:32.649: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929" in namespace "pods-9394" to be "terminated due to deadline exceeded"
Jan 16 09:10:33.014: INFO: Pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929": Phase="Running", Reason="", readiness=true. Elapsed: 364.63654ms
Jan 16 09:10:35.294: INFO: Pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929": Phase="Running", Reason="", readiness=true. Elapsed: 2.644707432s
Jan 16 09:10:37.522: INFO: Pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.872717907s
Jan 16 09:10:37.522: INFO: Pod "pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:37.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9394" for this suite.


• [SLOW TEST:15.673 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:38.179: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:38.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:38.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:39.202: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 136 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should require VolumeAttach for drivers with attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:17.385 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:41.675: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 156 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    apply set/view last-applied
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:959
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:43.419: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 259 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for pod-Service: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:163
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:44.169: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:44.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
• [SLOW TEST:18.800 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":10,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:13.576 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:47.265: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:07:42.651: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2511
... skipping 76 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:50.579: INFO: Only supported for providers [azure] (not gce)
... skipping 50 lines ...
Jan 16 09:10:36.005: INFO: Trying to get logs from node bootstrap-e2e-minion-group-8mzr pod exec-volume-test-inlinevolume-z6zr container exec-container-inlinevolume-z6zr: <nil>
STEP: delete the pod
Jan 16 09:10:36.794: INFO: Waiting for pod exec-volume-test-inlinevolume-z6zr to disappear
Jan 16 09:10:37.018: INFO: Pod exec-volume-test-inlinevolume-z6zr no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-z6zr
Jan 16 09:10:37.018: INFO: Deleting pod "exec-volume-test-inlinevolume-z6zr" in namespace "volume-6225"
Jan 16 09:10:38.374: INFO: error deleting PD "bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:10:38.374: INFO: Couldn't delete PD "bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:10:44.554: INFO: error deleting PD "bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:10:44.554: INFO: Couldn't delete PD "bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:10:51.614: INFO: Successfully deleted PD "bootstrap-e2e-c3915192-c90d-4d14-95b6-1bb40800f206".
Jan 16 09:10:51.614: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:10:51.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6225" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:10.952 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 137 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support two pods which share the same volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:140
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume","total":-1,"completed":2,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:10:58.242: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 63 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:10:19.094: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 48 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:01.322: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 234 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":3,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:02.801: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:24.481 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:03.694: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:03.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":7,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:04.116: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:04.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 51 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 91 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:05.946: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:05.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 163 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:06.203: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:06.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
• [SLOW TEST:11.674 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:88
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":7,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:06.864: INFO: Only supported for providers [vsphere] (not gce)
... skipping 166 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":9,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:12.802: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:12.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":16,"failed":0}
[BeforeEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:11:01.586: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ssh-1608
... skipping 10 lines ...
Jan 16 09:11:06.043: INFO: Got stdout from 35.197.95.20:22: Hello from prow@bootstrap-e2e-minion-group-zb1j
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jan 16 09:11:07.186: INFO: Got stdout from 35.247.44.158:22: stdout
Jan 16 09:11:07.186: INFO: Got stderr from 35.247.44.158:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:12.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-1608" for this suite.


• [SLOW TEST:11.386 seconds]
[k8s.io] [sig-node] SSH
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should SSH to all nodes and run commands
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":7,"skipped":16,"failed":0}
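The "SSH'ing to a nonexistent host" step above fails fast with `missing port in address` rather than a DNS or timeout error. That is because Go's `net` package rejects a dial address with no port during parsing, before any network I/O. A self-contained reproduction with the standard library:

```go
package main

import (
	"fmt"
	"net"
)

// dialErrString dials the given TCP address and returns the error text,
// or "" on success. An address with no port never reaches the network:
// address parsing fails first, producing exactly the
// "missing port in address" line seen in the log.
func dialErrString(addr string) string {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return err.Error()
	}
	conn.Close()
	return ""
}

func main() {
	fmt.Println(dialErrString("i.do.not.exist"))
	// prints: dial tcp: address i.do.not.exist: missing port in address
}
```

This is why the test can retry immediately: the failure is local and deterministic, not dependent on name resolution.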

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 58 lines ...
Jan 16 09:10:15.605: INFO: PersistentVolumeClaim csi-hostpathvqjqv found but phase is Pending instead of Bound.
Jan 16 09:10:17.928: INFO: PersistentVolumeClaim csi-hostpathvqjqv found but phase is Pending instead of Bound.
Jan 16 09:10:20.120: INFO: PersistentVolumeClaim csi-hostpathvqjqv found but phase is Pending instead of Bound.
Jan 16 09:10:22.258: INFO: PersistentVolumeClaim csi-hostpathvqjqv found and phase=Bound (16.171606687s)
STEP: Expanding non-expandable pvc
Jan 16 09:10:22.586: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 16 09:10:22.966: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:25.536: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:27.250: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:29.418: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:31.445: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:33.526: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:35.513: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:37.391: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:39.716: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:41.462: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:43.536: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:45.450: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:47.133: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:49.332: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:51.546: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:53.550: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 09:10:53.809: INFO: Error updating pvc csi-hostpathvqjqv: persistentvolumeclaims "csi-hostpathvqjqv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 16 09:10:53.810: INFO: Deleting PersistentVolumeClaim "csi-hostpathvqjqv"
Jan 16 09:10:54.040: INFO: Waiting up to 5m0s for PersistentVolume pvc-dcd653cb-71af-4aa7-a0e9-d7df578b1fbc to get deleted
Jan 16 09:10:54.220: INFO: PersistentVolume pvc-dcd653cb-71af-4aa7-a0e9-d7df578b1fbc found and phase=Bound (180.710374ms)
Jan 16 09:10:59.636: INFO: PersistentVolume pvc-dcd653cb-71af-4aa7-a0e9-d7df578b1fbc was removed
STEP: Deleting sc
... skipping 58 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":8,"skipped":62,"failed":0}
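The repeated "Error updating pvc" lines above are the expected outcome of this test: the apiserver's admission logic refuses the resize because the StorageClass lacks `allowVolumeExpansion`. A minimal sketch of the rule the error message states, using simplified stand-in types rather than the real Kubernetes API types:

```go
package main

import "fmt"

// StorageClass and PVC are simplified stand-ins for the real API types.
type StorageClass struct {
	AllowVolumeExpansion bool
}

type PVC struct {
	DynamicallyProvisioned bool // in the real API this is inferred from the claim's provenance
	Class                  *StorageClass
}

// allowResize mirrors the rule quoted in the log's error message: only a
// dynamically provisioned PVC whose StorageClass allows expansion may be
// resized; everything else is forbidden.
func allowResize(pvc PVC) error {
	if !pvc.DynamicallyProvisioned || pvc.Class == nil || !pvc.Class.AllowVolumeExpansion {
		return fmt.Errorf("only dynamically provisioned pvc can be resized and " +
			"the storageclass that provisions the pvc must support resize")
	}
	return nil
}

func main() {
	pvc := PVC{DynamicallyProvisioned: true, Class: &StorageClass{AllowVolumeExpansion: false}}
	fmt.Println(allowResize(pvc)) // denied: the class does not allow expansion
}
```

The test keeps retrying the update for a while and treats persistent rejection as success, which is why the same error repeats every ~2 seconds before the PVC is deleted.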

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:13.246: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver azure-disk doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:10:16.365: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-2141
... skipping 16 lines ...
• [SLOW TEST:57.218 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:13.592: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:13.926: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
• [SLOW TEST:24.362 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:14.948: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 15 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:06:42.794: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 114 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:16.707: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 43 lines ...
• [SLOW TEST:14.335 seconds]
[k8s.io] PrivilegedPod [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should enable privileged commands [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":4,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:20.556: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 114 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for endpoint-Service: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:208
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":9,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:20.953: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 178 lines ...
• [SLOW TEST:34.344 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:21.632: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 11 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 79 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":19,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:10:57.328: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2599
... skipping 29 lines ...
• [SLOW TEST:28.525 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:10:32.625: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:26.089: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:26.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
• [SLOW TEST:15.131 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:61.259 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:973
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":5,"skipped":42,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:30.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 28 lines ...
• [SLOW TEST:9.917 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:11:25.862: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1219
... skipping 17 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1130
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":-1,"completed":9,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:31.287: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:31.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 56 lines ...
Jan 16 09:11:21.787: INFO: Pod exec-volume-test-preprovisionedpv-xzsg no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-xzsg
Jan 16 09:11:21.787: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xzsg" in namespace "volume-4103"
STEP: Deleting pv and pvc
Jan 16 09:11:22.066: INFO: Deleting PersistentVolumeClaim "pvc-mcbjh"
Jan 16 09:11:22.385: INFO: Deleting PersistentVolume "gcepd-zznlw"
Jan 16 09:11:24.162: INFO: error deleting PD "bootstrap-e2e-b31702f5-347d-46a7-a201-a101287b3f07": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-b31702f5-347d-46a7-a201-a101287b3f07' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:11:24.162: INFO: Couldn't delete PD "bootstrap-e2e-b31702f5-347d-46a7-a201-a101287b3f07", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-b31702f5-347d-46a7-a201-a101287b3f07' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:11:31.258: INFO: Successfully deleted PD "bootstrap-e2e-b31702f5-347d-46a7-a201-a101287b3f07".
Jan 16 09:11:31.258: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:31.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4103" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:19.334 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":5,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:34.300: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
• [SLOW TEST:14.519 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:38.785: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 108 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1527
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":7,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:38.803: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:38.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 158 lines ...
• [SLOW TEST:17.322 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:39.063: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 21 lines ...
• [SLOW TEST:12.009 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable deny evictions, integer => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction","total":-1,"completed":8,"skipped":27,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:41.102: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 177 lines ...
• [SLOW TEST:16.883 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:46.976: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 146 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":16,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

S
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":7,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:10:15.066: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 137 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:52.520: INFO: Only supported for providers [azure] (not gce)
... skipping 182 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:19.178 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:11:59.203: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:11:59.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 212 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":-1,"completed":11,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:01.976: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:01.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 191 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 49 lines ...
• [SLOW TEST:37.950 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:04.058: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 39 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should be able to pull from private registry with secret [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:385
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":9,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:104.929 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:09.107: INFO: Driver hostPath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:09.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 39 lines ...
• [SLOW TEST:95.119 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:61
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":7,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:13.264: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 69 lines ...
Jan 16 09:12:03.574: INFO: Pod exec-volume-test-preprovisionedpv-kbhp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-kbhp
Jan 16 09:12:03.574: INFO: Deleting pod "exec-volume-test-preprovisionedpv-kbhp" in namespace "volume-2730"
STEP: Deleting pv and pvc
Jan 16 09:12:04.106: INFO: Deleting PersistentVolumeClaim "pvc-qxclr"
Jan 16 09:12:04.548: INFO: Deleting PersistentVolume "gcepd-st9tq"
Jan 16 09:12:06.026: INFO: error deleting PD "bootstrap-e2e-063236fd-cabc-4662-a81a-c747f90ece5b": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-063236fd-cabc-4662-a81a-c747f90ece5b' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource
Jan 16 09:12:06.026: INFO: Couldn't delete PD "bootstrap-e2e-063236fd-cabc-4662-a81a-c747f90ece5b", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-063236fd-cabc-4662-a81a-c747f90ece5b' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource
Jan 16 09:12:13.169: INFO: Successfully deleted PD "bootstrap-e2e-063236fd-cabc-4662-a81a-c747f90ece5b".
Jan 16 09:12:13.169: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:13.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2730" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":50,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:16.037 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:15.292: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 51 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}
[BeforeEach] [sig-windows] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jan 16 09:12:19.110: INFO: Only supported for node OS distro [windows] (not gci)
[AfterEach] [sig-windows] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:19.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:22.209: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 47 lines ...
• [SLOW TEST:11.489 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:26.089: INFO: Only supported for providers [openstack] (not gce)
... skipping 105 lines ...
• [SLOW TEST:12.176 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:27.478: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:27.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:48.662 seconds]
[sig-storage] PVC Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:137
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":8,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:27.493: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 253 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:28.038: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 145 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5622
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 16 09:12:28.183: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 31 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:12:29.811: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:12:29.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 99 lines ...
Jan 16 09:11:17.995: INFO: stdout: "NAMESPACE      NAME                AGE   REQUEST     LIMIT\nkubectl-2557   rq1namehgbzcc6kgv   1s    cpu: 0/5M   \n"
Jan 16 09:11:19.090: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get persistentvolumes --all-namespaces'
Jan 16 09:11:19.945: INFO: stderr: ""
Jan 16 09:11:19.946: INFO: stdout: "NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                          STORAGECLASS                                                        REASON   AGE\ngcepd-zznlw                                2Gi        RWO            Retain           Bound       volume-4103/pvc-mcbjh                          volume-4103                                                                  18s\nlocal-pvnn4gc                              2Gi        RWO            Retain           Released    persistent-local-volumes-test-1435/pvc-x4vft   local-volume-test-storageclass-persistent-local-volumes-test-1435            2m11s\nlocal-pvq9whj                              2Gi        RWO            Retain           Bound       persistent-local-volumes-test-7270/pvc-dj8ft   local-volume-test-storageclass-persistent-local-volumes-test-7270            17s\nlocal-qrgnm                                2Gi        RWO            Retain           Bound       provisioning-5143/pvc-d7ngh                    provisioning-5143                                                            32s\npv1namehgbzcc6kgv                          3M         RWO            Retain           Available                                                                                                                               1s\npvc-3bf18753-d8e0-4462-b544-15f817d42488   1Mi        RWO            Delete           Released    provisioning-4867/csi-hostpathhmm44            provisioning-4867-csi-hostpath-provisioning-4867-scrcv9q                     48s\npvc-83fd9abc-a84e-4314-8567-027cac1839a5   5Gi        RWO            Delete           Released    volume-201/gcepdzwrpl                          volume-201-gcepd-scqbdcr                                                     110s\n"
Jan 16 09:11:21.133: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 16 09:11:23.203: INFO: stderr: ""
Jan 16 09:11:23.203: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                    OBJECT                                                                      MESSAGE\nconfigmap-6387                       20s         Normal    Scheduled                 pod/pod-configmaps-3b2e7f8a-0d48-44e7-ae38-dfd5a4a08d70                     Successfully assigned configmap-6387/pod-configmaps-3b2e7f8a-0d48-44e7-ae38-dfd5a4a08d70 to bootstrap-e2e-minion-group-8mzr\nconfigmap-6387                       18s         Normal    Pulled                    pod/pod-configmaps-3b2e7f8a-0d48-44e7-ae38-dfd5a4a08d70                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-6387                       18s         Normal    Created                   pod/pod-configmaps-3b2e7f8a-0d48-44e7-ae38-dfd5a4a08d70                     Created container configmap-volume-test\nconfigmap-6387                       18s         Normal    Started                   pod/pod-configmaps-3b2e7f8a-0d48-44e7-ae38-dfd5a4a08d70                     Started container configmap-volume-test\nconfigmap-7482                       62s         Normal    Scheduled                 pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Successfully assigned configmap-7482/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da to bootstrap-e2e-minion-group-zb1j\nconfigmap-7482                       59s         Normal    Pulled                    pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-7482                       59s         Normal    Created                   pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Created container delcm-volume-test\nconfigmap-7482                       58s         Normal    Started                   
pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Started container delcm-volume-test\nconfigmap-7482                       58s         Normal    Pulled                    pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-7482                       57s         Normal    Created                   pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Created container updcm-volume-test\nconfigmap-7482                       56s         Normal    Started                   pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Started container updcm-volume-test\nconfigmap-7482                       56s         Normal    Pulled                    pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-7482                       56s         Normal    Created                   pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Created container createcm-volume-test\nconfigmap-7482                       55s         Normal    Started                   pod/pod-configmaps-87de55ff-9f02-4a27-bf07-2ea862ac83da                     Started container createcm-volume-test\ncontainer-probe-3084                 52s         Normal    Scheduled                 pod/test-webserver-e3ce9e01-41e0-49b4-89c7-7504342f3cc1                     Successfully assigned container-probe-3084/test-webserver-e3ce9e01-41e0-49b4-89c7-7504342f3cc1 to bootstrap-e2e-minion-group-zb1j\ncontainer-probe-3084                 47s         Normal    Pulled                    pod/test-webserver-e3ce9e01-41e0-49b4-89c7-7504342f3cc1                     Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ncontainer-probe-3084 
                46s         Normal    Created                   pod/test-webserver-e3ce9e01-41e0-49b4-89c7-7504342f3cc1                     Created container test-webserver\ncontainer-probe-3084                 46s         Normal    Started                   pod/test-webserver-e3ce9e01-41e0-49b4-89c7-7504342f3cc1                     Started container test-webserver\ncronjob-771                          13s         Normal    Scheduled                 pod/concurrent-1579165860-skldd                                             Successfully assigned cronjob-771/concurrent-1579165860-skldd to bootstrap-e2e-minion-group-8mzr\ncronjob-771                          11s         Warning   FailedMount               pod/concurrent-1579165860-skldd                                             MountVolume.SetUp failed for volume \"default-token-6vdwn\" : failed to sync secret cache: timed out waiting for the condition\ncronjob-771                          10s         Normal    Pulled                    pod/concurrent-1579165860-skldd                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\ncronjob-771                          10s         Normal    Created                   pod/concurrent-1579165860-skldd                                             Created container c\ncronjob-771                          9s          Normal    Started                   pod/concurrent-1579165860-skldd                                             Started container c\ncronjob-771                          13s         Normal    SuccessfulCreate          job/concurrent-1579165860                                                   Created pod: concurrent-1579165860-skldd\ncronjob-771                          13s         Normal    SuccessfulCreate          cronjob/concurrent                                                          Created job concurrent-1579165860\ncsi-mock-volumes-1747                96s         Warning   FailedMount          
     pod/csi-mockplugin-0                                                        MountVolume.SetUp failed for volume \"csi-mock-token-fp6s2\" : failed to sync secret cache: timed out waiting for the condition\ncsi-mock-volumes-1747                94s         Normal    Pulled                    pod/csi-mockplugin-0                                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-1747                94s         Normal    Created                   pod/csi-mockplugin-0                                                        Created container csi-provisioner\ncsi-mock-volumes-1747                94s         Normal    Started                   pod/csi-mockplugin-0                                                        Started container csi-provisioner\ncsi-mock-volumes-1747                94s         Normal    Pulled                    pod/csi-mockplugin-0                                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-1747                94s         Normal    Created                   pod/csi-mockplugin-0                                                        Created container driver-registrar\ncsi-mock-volumes-1747                94s         Normal    Started                   pod/csi-mockplugin-0                                                        Started container driver-registrar\ncsi-mock-volumes-1747                94s         Normal    Pulled                    pod/csi-mockplugin-0                                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-1747                94s         Normal    Created                   pod/csi-mockplugin-0                                                        Created container mock\ncsi-mock-volumes-1747                93s         Normal    Started      
             pod/csi-mockplugin-0                                                        Started container mock\ncsi-mock-volumes-1747                43s         Normal    Killing                   pod/csi-mockplugin-0                                                        Stopping container csi-provisioner\ncsi-mock-volumes-1747                43s         Normal    Killing                   pod/csi-mockplugin-0                                                        Stopping container mock\ncsi-mock-volumes-1747                43s         Normal    Killing                   pod/csi-mockplugin-0                                                        Stopping container driver-registrar\ncsi-mock-volumes-1747                41s         Normal    Pulled                    pod/csi-mockplugin-attacher-0                                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-1747                41s         Normal    Created                   pod/csi-mockplugin-attacher-0                                               Created container csi-attacher\ncsi-mock-volumes-1747                41s         Normal    Started                   pod/csi-mockplugin-attacher-0                                               Started container csi-attacher\ncsi-mock-volumes-1747                97s         Normal    SuccessfulCreate          statefulset/csi-mockplugin-attacher                                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-1747                97s         Normal    SuccessfulCreate          statefulset/csi-mockplugin                                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-1747                95s         Normal    ExternalProvisioning      persistentvolumeclaim/pvc-hxg9p                                             waiting for a volume to be 
created, either by external provisioner \"csi-mock-csi-mock-volumes-1747\" or manually created by system administrator\ncsi-mock-volumes-1747                92s         Normal    Provisioning              persistentvolumeclaim/pvc-hxg9p                                             External provisioner is provisioning volume for claim \"csi-mock-volumes-1747/pvc-hxg9p\"\ncsi-mock-volumes-1747                92s         Normal    ProvisioningSucceeded     persistentvolumeclaim/pvc-hxg9p                                             Successfully provisioned volume pvc-a96af452-209b-46f0-b1f2-fb12046b03a2\ncsi-mock-volumes-1747                87s         Warning   FailedMount               pod/pvc-volume-tester-pwwhk                                                 Unable to attach or mount volumes: unmounted volumes=[my-volume default-token-r9d6z], unattached volumes=[my-volume default-token-r9d6z]: error processing PVC csi-mock-volumes-1747/pvc-hxg9p: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-hxg9p\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-8mzr\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"csi-mock-volumes-1747\": no relationship found between node \"bootstrap-e2e-minion-group-8mzr\" and this object\ncsi-mock-volumes-1747                86s         Normal    SuccessfulAttachVolume    pod/pvc-volume-tester-pwwhk                                                 AttachVolume.Attach succeeded for volume \"pvc-a96af452-209b-46f0-b1f2-fb12046b03a2\"\ncsi-mock-volumes-1747                72s         Normal    Pulled                    pod/pvc-volume-tester-pwwhk                                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-1747                72s         Normal    Created                   pod/pvc-volume-tester-pwwhk                                                 Created container volume-tester\ncsi-mock-volumes-1747         
       72s         Normal    Started                   pod/pvc-volume-tester-pwwhk                                                 Started container volume-tester\ncsi-mock-volumes-1747                67s         Normal    Killing                   pod/pvc-volume-tester-pwwhk                                                 Stopping container volume-tester\ndefault                              8m35s       Normal    RegisteredNode            node/bootstrap-e2e-master                                                   Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller\ndefault                              8m33s       Normal    Starting                  node/bootstrap-e2e-minion-group-451g                                        Starting kubelet.\ndefault                              8m32s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-451g                                        Node bootstrap-e2e-minion-group-451g status is now: NodeHasSufficientMemory\ndefault                              8m32s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-451g                                        Node bootstrap-e2e-minion-group-451g status is now: NodeHasNoDiskPressure\ndefault                              8m32s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-451g                                        Node bootstrap-e2e-minion-group-451g status is now: NodeHasSufficientPID\ndefault                              8m32s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-451g                                        Updated Node Allocatable limit across pods\ndefault                              8m32s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-451g                                        Starting containerd container runtime...\ndefault                              8m32s       Warning   DockerStart               
node/bootstrap-e2e-minion-group-451g                                        Starting Docker Application Container Engine...\ndefault                              8m32s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-451g                                        Started Kubernetes kubelet.\ndefault                              8m31s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-451g                                        Node bootstrap-e2e-minion-group-451g status is now: NodeReady\ndefault                              8m30s       Normal    Starting                  node/bootstrap-e2e-minion-group-451g                                        Starting kube-proxy.\ndefault                              8m30s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-451g                                        Node bootstrap-e2e-minion-group-451g event: Registered Node bootstrap-e2e-minion-group-451g in Controller\ndefault                              8m32s       Normal    Starting                  node/bootstrap-e2e-minion-group-7fqk                                        Starting kubelet.\ndefault                              8m31s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-7fqk                                        Node bootstrap-e2e-minion-group-7fqk status is now: NodeHasSufficientMemory\ndefault                              8m31s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-7fqk                                        Node bootstrap-e2e-minion-group-7fqk status is now: NodeHasNoDiskPressure\ndefault                              8m31s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-7fqk                                        Node bootstrap-e2e-minion-group-7fqk status is now: NodeHasSufficientPID\ndefault                              8m31s       Normal    NodeAllocatableEnforced   
node/bootstrap-e2e-minion-group-7fqk                                        Updated Node Allocatable limit across pods\ndefault                              8m31s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-7fqk                                        Node bootstrap-e2e-minion-group-7fqk status is now: NodeReady\ndefault                              8m30s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-7fqk                                        Starting containerd container runtime...\ndefault                              8m30s       Warning   DockerStart               node/bootstrap-e2e-minion-group-7fqk                                        Starting Docker Application Container Engine...\ndefault                              8m30s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-7fqk                                        Started Kubernetes kubelet.\ndefault                              8m30s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-7fqk                                        Node bootstrap-e2e-minion-group-7fqk event: Registered Node bootstrap-e2e-minion-group-7fqk in Controller\ndefault                              8m29s       Normal    Starting                  node/bootstrap-e2e-minion-group-7fqk                                        Starting kube-proxy.\ndefault                              8m32s       Normal    Starting                  node/bootstrap-e2e-minion-group-8mzr                                        Starting kubelet.\ndefault                              8m32s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-8mzr                                        Node bootstrap-e2e-minion-group-8mzr status is now: NodeHasSufficientMemory\ndefault                              8m32s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-8mzr                                        Node 
bootstrap-e2e-minion-group-8mzr status is now: NodeHasNoDiskPressure\ndefault                              8m32s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-8mzr                                        Node bootstrap-e2e-minion-group-8mzr status is now: NodeHasSufficientPID\ndefault                              8m32s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-8mzr                                        Updated Node Allocatable limit across pods\ndefault                              8m31s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-8mzr                                        Starting containerd container runtime...\ndefault                              8m31s       Warning   DockerStart               node/bootstrap-e2e-minion-group-8mzr                                        Starting Docker Application Container Engine...\ndefault                              8m31s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-8mzr                                        Started Kubernetes kubelet.\ndefault                              8m30s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-8mzr                                        Node bootstrap-e2e-minion-group-8mzr event: Registered Node bootstrap-e2e-minion-group-8mzr in Controller\ndefault                              8m30s       Normal    Starting                  node/bootstrap-e2e-minion-group-8mzr                                        Starting kube-proxy.\ndefault                              8m21s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-8mzr                                        Node bootstrap-e2e-minion-group-8mzr status is now: NodeReady\ndefault                              8m33s       Normal    Starting                  node/bootstrap-e2e-minion-group-zb1j                                        Starting kubelet.\ndefault                         
     8m33s       Normal    NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-zb1j                                        Node bootstrap-e2e-minion-group-zb1j status is now: NodeHasSufficientMemory\ndefault                              8m33s       Normal    NodeHasNoDiskPressure     node/bootstrap-e2e-minion-group-zb1j                                        Node bootstrap-e2e-minion-group-zb1j status is now: NodeHasNoDiskPressure\ndefault                              8m33s       Normal    NodeHasSufficientPID      node/bootstrap-e2e-minion-group-zb1j                                        Node bootstrap-e2e-minion-group-zb1j status is now: NodeHasSufficientPID\ndefault                              8m33s       Normal    NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-zb1j                                        Updated Node Allocatable limit across pods\ndefault                              8m31s       Normal    NodeReady                 node/bootstrap-e2e-minion-group-zb1j                                        Node bootstrap-e2e-minion-group-zb1j status is now: NodeReady\ndefault                              8m31s       Normal    Starting                  node/bootstrap-e2e-minion-group-zb1j                                        Starting kube-proxy.\ndefault                              8m30s       Normal    RegisteredNode            node/bootstrap-e2e-minion-group-zb1j                                        Node bootstrap-e2e-minion-group-zb1j event: Registered Node bootstrap-e2e-minion-group-zb1j in Controller\ndefault                              8m30s       Warning   ContainerdStart           node/bootstrap-e2e-minion-group-zb1j                                        Starting containerd container runtime...\ndefault                              8m30s       Warning   DockerStart               node/bootstrap-e2e-minion-group-zb1j                                        Starting Docker Application Container Engine...\ndefault                       
       8m30s       Warning   KubeletStart              node/bootstrap-e2e-minion-group-zb1j                                        Started Kubernetes kubelet.\ndefault                              4m24s       Normal    VolumeDelete              persistentvolume/pvc-2a04f730-527d-4ea3-86c7-f4fc8676f06f                   googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-2a04f730-527d-4ea3-86c7-f4fc8676f06f' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource\ndefault                              2m14s       Normal    VolumeDelete              persistentvolume/pvc-7626b4fc-937b-4365-9347-9b61a735343f                   googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-7626b4fc-937b-4365-9347-9b61a735343f' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource\ndefault                              19s         Normal    VolumeDelete              persistentvolume/pvc-83fd9abc-a84e-4314-8567-027cac1839a5                   googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-83fd9abc-a84e-4314-8567-027cac1839a5' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource\ndefault                              4m30s       Normal    VolumeDelete              persistentvolume/pvc-8522b794-7802-4555-9825-4bfe29e9e0fe                   googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-8522b794-7802-4555-9825-4bfe29e9e0fe' is already being used by 
'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource
default  4m12s  Normal  VolumeDelete  persistentvolume/pvc-a587e266-5c89-47aa-8439-d2923a8b84b2  googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-a587e266-5c89-47aa-8439-d2923a8b84b2' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7fqk', resourceInUseByAnotherResource
default  4m39s  Normal  VolumeDelete  persistentvolume/pvc-bbd79b6e-d27b-46b2-88e0-52f46f59d091  googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-bbd79b6e-d27b-46b2-88e0-52f46f59d091' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource
default  58s  Normal  VolumeDelete  persistentvolume/pvc-eb986de0-7caf-4cbc-86fb-66c0cc778c79  googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-eb986de0-7caf-4cbc-86fb-66c0cc778c79' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7fqk', resourceInUseByAnotherResource
default  57s  Normal  VolumeDelete  persistentvolume/pvc-ed8f87ec-b742-46d6-9e6e-01121e52667e  googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-ed8f87ec-b742-46d6-9e6e-01121e52667e' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-zb1j', resourceInUseByAnotherResource
disruption-6692  4s  Normal  Scheduled  pod/pod-0  Successfully assigned disruption-6692/pod-0 to bootstrap-e2e-minion-group-7fqk
disruption-6692  1s  Normal  Pulled  pod/pod-0  Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-6692  1s  Normal  Created  pod/pod-0  Created container busybox
disruption-6692  0s  Normal  Started  pod/pod-0  Started container busybox
disruption-6692  3s  Normal  Scheduled  pod/pod-1  Successfully assigned disruption-6692/pod-1 to bootstrap-e2e-minion-group-7fqk
disruption-6692  3s  Normal  Scheduled  pod/pod-2  Successfully assigned disruption-6692/pod-2 to bootstrap-e2e-minion-group-7fqk
disruption-6692  1s  Normal  Pulled  pod/pod-2  Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-6692  1s  Normal  Created  pod/pod-2  Created container busybox
dns-6959  65s  Normal  Scheduled  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Successfully assigned dns-6959/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48 to bootstrap-e2e-minion-group-zb1j
dns-6959  64s  Normal  Pulling  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Pulling image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"
dns-6959  63s  Normal  Pulled  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"
dns-6959  62s  Normal  Created  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Created container webserver
dns-6959  61s  Normal  Started  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Started container webserver
dns-6959  61s  Normal  Pulling  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Pulling image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"
dns-6959  56s  Normal  Pulled  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"
dns-6959  56s  Normal  Created  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Created container querier
dns-6959  56s  Normal  Started  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Started container querier
dns-6959  56s  Normal  Pulling  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Pulling image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-6959  33s  Normal  Pulled  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-6959  33s  Normal  Created  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Created container jessie-querier
dns-6959  32s  Normal  Started  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Started container jessie-querier
dns-6959  25s  Normal  Killing  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Stopping container jessie-querier
dns-6959  25s  Normal  Killing  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Stopping container querier
dns-6959  25s  Normal  Killing  pod/dns-test-46e6520e-686d-4f1b-9a02-ec9349f78b48  Stopping container webserver
dns-6959  25s  Warning  FailedToUpdateEndpoint  endpoints/dns-test-service-2  Failed to update endpoint dns-6959/dns-test-service-2: Operation cannot be fulfilled on endpoints "dns-test-service-2": the object has been modified; please apply your changes to the latest version and try again
downward-api-2549  5s  Normal  Scheduled  pod/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154  Successfully assigned downward-api-2549/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154 to bootstrap-e2e-minion-group-451g
downward-api-2549  4s  Warning  FailedMount  pod/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154  MountVolume.SetUp failed for volume "default-token-7s5rk" : failed to sync secret cache: timed out waiting for the condition
downward-api-2549  2s  Normal  Pulled  pod/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
downward-api-2549  2s  Normal  Created  pod/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154  Created container client-container
downward-api-2549  2s  Normal  Started  pod/downwardapi-volume-cd6c0f7c-2149-4558-b083-4b1d900bf154  Started container client-container
e2e-privileged-pod-5367  14s  Normal  Scheduled  pod/privileged-pod  Successfully assigned e2e-privileged-pod-5367/privileged-pod to bootstrap-e2e-minion-group-zb1j
e2e-privileged-pod-5367  11s  Normal  Pulled  pod/privileged-pod  Container image "docker.io/library/busybox:1.29" already present on machine
e2e-privileged-pod-5367  11s  Normal  Created  pod/privileged-pod  Created container privileged-container
e2e-privileged-pod-5367  11s  Normal  Started  pod/privileged-pod  Started container privileged-container
e2e-privileged-pod-5367  11s  Normal  Pulled  pod/privileged-pod  Container image "docker.io/library/busybox:1.29" already present on machine
e2e-privileged-pod-5367  11s  Normal  Created  pod/privileged-pod  Created container not-privileged-container
e2e-privileged-pod-5367  10s  Normal  Started  pod/privileged-pod  Started container not-privileged-container
emptydir-4695  18s  Normal  Scheduled  pod/pod-5a29f708-5c76-4420-9757-97aed7e5c1e7  Successfully assigned emptydir-4695/pod-5a29f708-5c76-4420-9757-97aed7e5c1e7 to bootstrap-e2e-minion-group-zb1j
emptydir-4695  16s  Normal  Pulled  pod/pod-5a29f708-5c76-4420-9757-97aed7e5c1e7  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
emptydir-4695  16s  Normal  Created  pod/pod-5a29f708-5c76-4420-9757-97aed7e5c1e7  Created container test-container
emptydir-4695  15s  Normal  Started  pod/pod-5a29f708-5c76-4420-9757-97aed7e5c1e7  Started container test-container
emptydir-5965  7s  Normal  Scheduled  pod/pod-da3576d8-56f3-43f5-ac49-3d18ba22ca8f  Successfully assigned emptydir-5965/pod-da3576d8-56f3-43f5-ac49-3d18ba22ca8f to bootstrap-e2e-minion-group-7fqk
emptydir-5965  5s  Normal  Pulled  pod/pod-da3576d8-56f3-43f5-ac49-3d18ba22ca8f  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
emptydir-5965  4s  Normal  Created  pod/pod-da3576d8-56f3-43f5-ac49-3d18ba22ca8f  Created container test-container
emptydir-5965  4s  Normal  Started  pod/pod-da3576d8-56f3-43f5-ac49-3d18ba22ca8f  Started container test-container
emptydir-9365  54s  Normal  Scheduled  pod/pod-2aaf4d75-1f92-4a4d-a078-b87fb41c3c36  Successfully assigned emptydir-9365/pod-2aaf4d75-1f92-4a4d-a078-b87fb41c3c36 to bootstrap-e2e-minion-group-zb1j
emptydir-9365  50s  Normal  Pulled  pod/pod-2aaf4d75-1f92-4a4d-a078-b87fb41c3c36  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
emptydir-9365  50s  Normal  Created  pod/pod-2aaf4d75-1f92-4a4d-a078-b87fb41c3c36  Created container test-container
emptydir-9365  48s  Normal  Started  pod/pod-2aaf4d75-1f92-4a4d-a078-b87fb41c3c36  Started container test-container
ephemeral-8495  4m45s  Normal  Pulled  pod/csi-hostpath-attacher-0  Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-8495  4m44s  Normal  Created  pod/csi-hostpath-attacher-0  Created container csi-attacher
ephemeral-8495  4m41s  Normal  Started  pod/csi-hostpath-attacher-0  Started container csi-attacher
ephemeral-8495  27s  Normal  Killing  pod/csi-hostpath-attacher-0  Stopping container csi-attacher
ephemeral-8495  4m57s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-8495  4m54s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-8495  4m46s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-8495  4m46s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
ephemeral-8495  4m43s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
ephemeral-8495  25s  Normal  Killing  pod/csi-hostpath-provisioner-0  Stopping container csi-provisioner
ephemeral-8495  4m56s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-8495  4m54s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-8495  4m48s  Normal  Pulled  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-8495  4m47s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
ephemeral-8495  4m45s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
ephemeral-8495  25s  Normal  Killing  pod/csi-hostpath-resizer-0  Stopping container csi-resizer
ephemeral-8495  4m57s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-8495  4m56s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-8495  4m56s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-8495  4m55s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
ephemeral-8495  4m52s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
ephemeral-8495  4m52s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
ephemeral-8495  4m38s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
ephemeral-8495  4m37s  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
ephemeral-8495  4m33s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
ephemeral-8495  4m33s  Normal  Pulling  pod/csi-hostpathplugin-0  Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
ephemeral-8495  4m29s  Normal  Pulled  pod/csi-hostpathplugin-0  Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
ephemeral-8495  4m28s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
ephemeral-8495  4m24s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
ephemeral-8495  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container node-driver-registrar
ephemeral-8495  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container liveness-probe
ephemeral-8495  26s  Normal  Killing  pod/csi-hostpathplugin-0  Stopping container hostpath
ephemeral-8495  23s  Warning  Unhealthy  pod/csi-hostpathplugin-0  Liveness probe failed: Get http://10.64.2.15:9898/healthz: dial tcp 10.64.2.15:9898: connect: connection refused
ephemeral-8495  4m59s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-8495  4m48s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-8495  4m47s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
ephemeral-8495  4m45s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
ephemeral-8495  17s  Warning  FailedMount  pod/csi-snapshotter-0  MountVolume.SetUp failed for volume "csi-snapshotter-token-cp886" : secret "csi-snapshotter-token-cp886" not found
ephemeral-8495  4m56s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-8495  4m56s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-8495  4m40s  Warning  FailedMount  pod/inline-volume-tester-brlqm  MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-8495 not found in the list of registered CSI drivers
ephemeral-8495  4m17s  Normal  Pulled  pod/inline-volume-tester-brlqm  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-8495  4m16s  Normal  Created  pod/inline-volume-tester-brlqm  Created container csi-volume-tester
ephemeral-8495  4m9s  Normal  Started  pod/inline-volume-tester-brlqm  Started container csi-volume-tester
ephemeral-8495  85s  Normal  Killing  pod/inline-volume-tester-brlqm  Stopping container csi-volume-tester
ephemeral-8495  3m22s  Normal  Pulled  pod/inline-volume-tester2-djxnf  Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-8495  3m22s  Normal  Created  pod/inline-volume-tester2-djxnf  Created container csi-volume-tester
ephemeral-8495  3m14s  Normal  Started  pod/inline-volume-tester2-djxnf  Started container csi-volume-tester
ephemeral-8495  2m37s  Normal  Killing  pod/inline-volume-tester2-djxnf  Stopping container csi-volume-tester
job-2141  63s  Normal  Scheduled  pod/all-pods-removed-8f679  Successfully assigned job-2141/all-pods-removed-8f679 to bootstrap-e2e-minion-group-zb1j
job-2141  60s  Normal  Pulled  pod/all-pods-removed-8f679  Container image "docker.io/library/busybox:1.29" already present on machine
job-2141  59s  Normal  Created  pod/all-pods-removed-8f679  Created container c
job-2141  59s  Normal  Started  pod/all-pods-removed-8f679  Started container c
job-2141  52s  Normal  Killing  pod/all-pods-removed-8f679  Stopping container c
job-2141  63s  Normal  Scheduled  pod/all-pods-removed-gx548  Successfully assigned job-2141/all-pods-removed-gx548 to bootstrap-e2e-minion-group-8mzr
job-2141  62s  Normal  Pulled  pod/all-pods-removed-gx548  Container image "docker.io/library/busybox:1.29" already present on machine
job-2141  62s  Normal  Created  pod/all-pods-removed-gx548  Created container c
job-2141  61s  Normal  Started  pod/all-pods-removed-gx548  Started container c
job-2141  53s  Normal  Killing  pod/all-pods-removed-gx548  Stopping container c
job-2141  64s  Normal  SuccessfulCreate  job/all-pods-removed  Created pod: all-pods-removed-8f679
job-2141  63s  Normal  SuccessfulCreate  job/all-pods-removed  Created pod: all-pods-removed-gx548
kube-system  8m43s  Warning  FailedScheduling  pod/coredns-65567c7b57-7257w  no nodes available to schedule pods
kube-system  8m34s  Warning  FailedScheduling  pod/coredns-65567c7b57-7257w  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  8m27s  Warning  FailedScheduling  pod/coredns-65567c7b57-7257w  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m18s  Normal  Scheduled  pod/coredns-65567c7b57-7257w  Successfully assigned kube-system/coredns-65567c7b57-7257w to bootstrap-e2e-minion-group-8mzr
kube-system  8m17s  Normal  Pulling  pod/coredns-65567c7b57-7257w  Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  8m13s  Normal  Pulled  pod/coredns-65567c7b57-7257w  Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  8m12s  Normal  Created  pod/coredns-65567c7b57-7257w  Created container coredns
kube-system  8m11s  Normal  Started  pod/coredns-65567c7b57-7257w  Started container coredns
kube-system  8m9s  Normal  Scheduled  pod/coredns-65567c7b57-x82kv  Successfully assigned kube-system/coredns-65567c7b57-x82kv to bootstrap-e2e-minion-group-7fqk
kube-system  8m7s  Normal  Pulling  pod/coredns-65567c7b57-x82kv  Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  8m6s  Normal  Pulled  pod/coredns-65567c7b57-x82kv  Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  8m5s  Normal  Created  pod/coredns-65567c7b57-x82kv  Created container coredns
kube-system  8m5s  Normal  Started  pod/coredns-65567c7b57-x82kv  Started container coredns
kube-system  8m48s  Warning  FailedCreate  replicaset/coredns-65567c7b57  Error creating: pods "coredns-65567c7b57-" is forbidden: no providers available to validate pod request
kube-system  8m46s  Warning  FailedCreate  replicaset/coredns-65567c7b57  Error creating: pods "coredns-65567c7b57-" is forbidden: unable to validate against any pod security policy: []
kube-system  8m43s  Normal  SuccessfulCreate  replicaset/coredns-65567c7b57  Created pod: coredns-65567c7b57-7257w
kube-system  8m9s  Normal  SuccessfulCreate  replicaset/coredns-65567c7b57  Created pod: coredns-65567c7b57-x82kv
kube-system  8m49s  Normal  ScalingReplicaSet  deployment/coredns  Scaled up replica set coredns-65567c7b57 to 1
kube-system  8m9s  Normal  ScalingReplicaSet  deployment/coredns  Scaled up replica set coredns-65567c7b57 to 2
kube-system  8m45s  Warning  FailedScheduling  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  no nodes available to schedule pods
kube-system  8m33s  Warning  FailedScheduling  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  8m25s  Warning  FailedScheduling  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m23s  Normal  Scheduled  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-dmxj8 to bootstrap-e2e-minion-group-451g
kube-system  8m20s  Normal  Pulling  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Pulling image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system  8m17s  Normal  Pulled  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Successfully pulled image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system  8m16s  Normal  Created  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Created container event-exporter
kube-system  8m16s  Normal  Started  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Started container event-exporter
kube-system  8m16s  Normal  Pulling  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Pulling image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system  8m13s  Normal  Pulled  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system  8m12s  Normal  Created  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Created container prometheus-to-sd-exporter
kube-system  8m11s  Normal  Started  pod/event-exporter-v0.3.1-747b47fcd-dmxj8  Started container prometheus-to-sd-exporter
kube-system  8m48s  Normal  SuccessfulCreate  replicaset/event-exporter-v0.3.1-747b47fcd  Created pod: event-exporter-v0.3.1-747b47fcd-dmxj8
kube-system  8m48s  Normal  ScalingReplicaSet  deployment/event-exporter-v0.3.1  Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1
kube-system  8m41s  Warning  FailedScheduling  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  no nodes available to schedule pods
kube-system  8m24s  Warning  FailedScheduling  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  8m18s  Normal  Scheduled  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-qxs56 to bootstrap-e2e-minion-group-8mzr
kube-system  8m17s  Normal  Pulling  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  Pulling image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  8m11s  Normal  Pulled  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  Successfully pulled image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  8m11s  Normal  Created  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  Created container fluentd-gcp-scaler
kube-system  8m11s  Normal  Started  pod/fluentd-gcp-scaler-76d9c77b4d-qxs56  Started container fluentd-gcp-scaler
kube-system  8m41s  Normal  SuccessfulCreate  replicaset/fluentd-gcp-scaler-76d9c77b4d  Created pod: fluentd-gcp-scaler-76d9c77b4d-qxs56
kube-system  8m41s  Normal  ScalingReplicaSet  deployment/fluentd-gcp-scaler  Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1
kube-system  7m35s  Normal  Scheduled  pod/fluentd-gcp-v3.2.0-4fcmt  Successfully assigned kube-system/fluentd-gcp-v3.2.0-4fcmt to bootstrap-e2e-minion-group-451g
kube-system  7m34s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-4fcmt  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  7m34s  Normal  Created  pod/fluentd-gcp-v3.2.0-4fcmt  Created container fluentd-gcp
kube-system  7m34s  Normal  Started  pod/fluentd-gcp-v3.2.0-4fcmt  Started container fluentd-gcp
kube-system  7m34s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-4fcmt  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  7m34s  Normal  Created  pod/fluentd-gcp-v3.2.0-4fcmt  Created container prometheus-to-sd-exporter
kube-system  7m33s  Normal  Started  pod/fluentd-gcp-v3.2.0-4fcmt  Started container prometheus-to-sd-exporter
kube-system  8m3s  Normal  Scheduled  pod/fluentd-gcp-v3.2.0-4qnrb  Successfully assigned kube-system/fluentd-gcp-v3.2.0-4qnrb to bootstrap-e2e-master
kube-system  8m2s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-4qnrb  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  8m2s  Normal  Created  pod/fluentd-gcp-v3.2.0-4qnrb  Created container fluentd-gcp
kube-system  8m2s  Normal  Started  pod/fluentd-gcp-v3.2.0-4qnrb  Started container fluentd-gcp
kube-system  8m2s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-4qnrb  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m2s  Normal  Created  pod/fluentd-gcp-v3.2.0-4qnrb  Created container prometheus-to-sd-exporter
kube-system  7m57s  Normal  Started  pod/fluentd-gcp-v3.2.0-4qnrb  Started container prometheus-to-sd-exporter
kube-system  8m31s  Normal  Scheduled  pod/fluentd-gcp-v3.2.0-5jrzr  Successfully assigned kube-system/fluentd-gcp-v3.2.0-5jrzr to bootstrap-e2e-minion-group-451g
kube-system  8m29s  Normal  Pulling  pod/fluentd-gcp-v3.2.0-5jrzr  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m16s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-5jrzr  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  8m16s  Normal  Created  pod/fluentd-gcp-v3.2.0-5jrzr  Created container fluentd-gcp
kube-system  8m15s  Normal  Started  pod/fluentd-gcp-v3.2.0-5jrzr  Started container fluentd-gcp
kube-system  8m15s  Normal  Pulled  pod/fluentd-gcp-v3.2.0-5jrzr  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  8m15s  Normal  Created  pod/fluentd-gcp-v3.2.0-5jrzr  Created container prometheus-to-sd-exporter
kube-system  8m14s  Normal  Started  pod/fluentd-gcp-v3.2.0-5jrzr  Started container prometheus-to-sd-exporter
kube-system  7m41s  Normal  Killing  pod/fluentd-gcp-v3.2.0-5jrzr  Stopping container fluentd-gcp
kube-system  7m41s  Normal  Killing  pod/fluentd-gcp-v3.2.0-5jrzr  Stopping container prometheus-to-sd-exporter
kube-system  8m30s  Normal  Scheduled  pod/fluentd-gcp-v3.2.0-8jtx7  Successfully assigned kube-system/fluentd-gcp-v3.2.0-8jtx7 to bootstrap-e2e-minion-group-7fqk
kube-system  8m29s  Warning  FailedMount  pod/fluentd-gcp-v3.2.0-8jtx7
                       MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          8m29s       Warning   FailedMount               pod/fluentd-gcp-v3.2.0-8jtx7                                                MountVolume.SetUp failed for volume \"fluentd-gcp-token-rz8gk\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          8m28s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m20s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-8jtx7                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m19s       Normal    Created                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Created container fluentd-gcp\nkube-system                          8m19s       Normal    Started                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Started container fluentd-gcp\nkube-system                          8m19s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-8jtx7                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          8m19s       Normal    Created                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Created container prometheus-to-sd-exporter\nkube-system                          8m18s       Normal    Started                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Started container prometheus-to-sd-exporter\nkube-system                  
        7m27s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Stopping container fluentd-gcp\nkube-system                          7m27s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-8jtx7                                                Stopping container prometheus-to-sd-exporter\nkube-system                          7m29s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-8k5j7                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-8k5j7 to bootstrap-e2e-minion-group-8mzr\nkube-system                          7m29s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-8k5j7                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          7m29s       Normal    Created                   pod/fluentd-gcp-v3.2.0-8k5j7                                                Created container fluentd-gcp\nkube-system                          7m29s       Normal    Started                   pod/fluentd-gcp-v3.2.0-8k5j7                                                Started container fluentd-gcp\nkube-system                          7m29s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-8k5j7                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          7m28s       Normal    Created                   pod/fluentd-gcp-v3.2.0-8k5j7                                                Created container prometheus-to-sd-exporter\nkube-system                          7m28s       Normal    Started                   pod/fluentd-gcp-v3.2.0-8k5j7                                                Started container prometheus-to-sd-exporter\nkube-system                          8m30s       Normal    
Scheduled                 pod/fluentd-gcp-v3.2.0-dvpn2                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-dvpn2 to bootstrap-e2e-minion-group-8mzr\nkube-system                          8m29s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m20s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-dvpn2                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m20s       Normal    Created                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Created container fluentd-gcp\nkube-system                          8m20s       Normal    Started                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Started container fluentd-gcp\nkube-system                          8m20s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-dvpn2                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          8m20s       Normal    Created                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Created container prometheus-to-sd-exporter\nkube-system                          8m20s       Normal    Started                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Started container prometheus-to-sd-exporter\nkube-system                          7m33s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-dvpn2                                                Stopping container fluentd-gcp\nkube-system                          7m33s       Normal    Killing              
     pod/fluentd-gcp-v3.2.0-dvpn2                                                Stopping container prometheus-to-sd-exporter\nkube-system                          7m23s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-qf7gj                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-qf7gj to bootstrap-e2e-minion-group-7fqk\nkube-system                          7m21s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-qf7gj                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          7m21s       Normal    Created                   pod/fluentd-gcp-v3.2.0-qf7gj                                                Created container fluentd-gcp\nkube-system                          7m21s       Normal    Started                   pod/fluentd-gcp-v3.2.0-qf7gj                                                Started container fluentd-gcp\nkube-system                          7m21s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-qf7gj                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          7m21s       Normal    Created                   pod/fluentd-gcp-v3.2.0-qf7gj                                                Created container prometheus-to-sd-exporter\nkube-system                          7m21s       Normal    Started                   pod/fluentd-gcp-v3.2.0-qf7gj                                                Started container prometheus-to-sd-exporter\nkube-system                          8m35s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-r6vm9                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-r6vm9 to bootstrap-e2e-master\nkube-system                          8m25s       Normal    
Pulling                   pod/fluentd-gcp-v3.2.0-r6vm9                                                Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m8s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-r6vm9                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m8s        Warning   Failed                    pod/fluentd-gcp-v3.2.0-r6vm9                                                Error: cannot find volume \"varlog\" to mount into container \"fluentd-gcp\"\nkube-system                          8m8s        Normal    Pulled                    pod/fluentd-gcp-v3.2.0-r6vm9                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          8m8s        Warning   Failed                    pod/fluentd-gcp-v3.2.0-r6vm9                                                Error: cannot find volume \"fluentd-gcp-token-rz8gk\" to mount into container \"prometheus-to-sd-exporter\"\nkube-system                          6m4s        Warning   FailedMount               pod/fluentd-gcp-v3.2.0-r6vm9                                                Unable to attach or mount volumes: unmounted volumes=[varlibdockercontainers config-volume fluentd-gcp-token-rz8gk varlog], unattached volumes=[varlibdockercontainers config-volume fluentd-gcp-token-rz8gk varlog]: timed out waiting for the condition\nkube-system                          8m31s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-xxpsd                                                Successfully assigned kube-system/fluentd-gcp-v3.2.0-xxpsd to bootstrap-e2e-minion-group-zb1j\nkube-system                          8m29s       Normal    Pulling                   pod/fluentd-gcp-v3.2.0-xxpsd                                   
             Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m20s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-xxpsd                                                Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          8m20s       Normal    Created                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Created container fluentd-gcp\nkube-system                          8m20s       Normal    Started                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Started container fluentd-gcp\nkube-system                          8m20s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-xxpsd                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          8m20s       Normal    Created                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Created container prometheus-to-sd-exporter\nkube-system                          8m20s       Normal    Started                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Started container prometheus-to-sd-exporter\nkube-system                          7m55s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Stopping container fluentd-gcp\nkube-system                          7m55s       Normal    Killing                   pod/fluentd-gcp-v3.2.0-xxpsd                                                Stopping container prometheus-to-sd-exporter\nkube-system                          7m43s       Normal    Scheduled                 pod/fluentd-gcp-v3.2.0-zl5qp                                                Successfully assigned 
kube-system/fluentd-gcp-v3.2.0-zl5qp to bootstrap-e2e-minion-group-zb1j\nkube-system                          7m42s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zl5qp                                                Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          7m42s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zl5qp                                                Created container fluentd-gcp\nkube-system                          7m42s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zl5qp                                                Started container fluentd-gcp\nkube-system                          7m42s       Normal    Pulled                    pod/fluentd-gcp-v3.2.0-zl5qp                                                Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          7m42s       Normal    Created                   pod/fluentd-gcp-v3.2.0-zl5qp                                                Created container prometheus-to-sd-exporter\nkube-system                          7m42s       Normal    Started                   pod/fluentd-gcp-v3.2.0-zl5qp                                                Started container prometheus-to-sd-exporter\nkube-system                          8m36s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-r6vm9\nkube-system                          8m32s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-xxpsd\nkube-system                          8m31s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-5jrzr\nkube-system        
                  8m31s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-dvpn2\nkube-system                          8m30s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-8jtx7\nkube-system                          8m8s        Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-r6vm9\nkube-system                          8m3s        Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-4qnrb\nkube-system                          7m55s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-xxpsd\nkube-system                          7m43s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-zl5qp\nkube-system                          7m41s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-5jrzr\nkube-system                          7m35s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-4fcmt\nkube-system                          7m33s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-dvpn2\nkube-system                          7m29s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                Created pod: fluentd-gcp-v3.2.0-8k5j7\nkube-system        
                  7m27s       Normal    SuccessfulDelete          daemonset/fluentd-gcp-v3.2.0                                                Deleted pod: fluentd-gcp-v3.2.0-8jtx7\nkube-system                          7m23s       Normal    SuccessfulCreate          daemonset/fluentd-gcp-v3.2.0                                                (combined from similar events): Created pod: fluentd-gcp-v3.2.0-qf7gj\nkube-system                          8m24s       Normal    LeaderElection            configmap/ingress-gce-lock                                                  bootstrap-e2e-master_871a0 became leader\nkube-system                          9m5s        Normal    LeaderElection            endpoints/kube-controller-manager                                           bootstrap-e2e-master_f10335bc-f7d7-4fa9-8af8-f8e777e92445 became leader\nkube-system                          9m5s        Normal    LeaderElection            lease/kube-controller-manager                                               bootstrap-e2e-master_f10335bc-f7d7-4fa9-8af8-f8e777e92445 became leader\nkube-system                          8m37s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    no nodes available to schedule pods\nkube-system                          8m34s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          8m31s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m26s       Warning   FailedScheduling          pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had 
taints that the pod didn't tolerate.\nkube-system                          8m17s       Normal    Scheduled                 pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-rvq8p to bootstrap-e2e-minion-group-zb1j\nkube-system                          8m15s       Normal    Pulling                   pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    Pulling image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          8m10s       Normal    Pulled                    pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    Successfully pulled image \"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\"\nkube-system                          8m10s       Normal    Created                   pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    Created container autoscaler\nkube-system                          8m9s        Normal    Started                   pod/kube-dns-autoscaler-65bc6d4889-rvq8p                                    Started container autoscaler\nkube-system                          8m42s       Warning   FailedCreate              replicaset/kube-dns-autoscaler-65bc6d4889                                   Error creating: pods \"kube-dns-autoscaler-65bc6d4889-\" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount \"kube-dns-autoscaler\" not found\nkube-system                          8m37s       Normal    SuccessfulCreate          replicaset/kube-dns-autoscaler-65bc6d4889                                   Created pod: kube-dns-autoscaler-65bc6d4889-rvq8p\nkube-system                          8m48s       Normal    ScalingReplicaSet         deployment/kube-dns-autoscaler                                              Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1\nkube-system                          8m4s        
Warning   FailedToUpdateEndpoint    endpoints/kube-dns                                                          Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again\nkube-system                          8m31s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-451g                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517\" already present on machine\nkube-system                          8m31s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-451g                              Created container kube-proxy\nkube-system                          8m31s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-451g                              Started container kube-proxy\nkube-system                          8m30s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-7fqk                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517\" already present on machine\nkube-system                          8m30s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-7fqk                              Created container kube-proxy\nkube-system                          8m30s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-7fqk                              Started container kube-proxy\nkube-system                          8m30s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-8mzr                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517\" already present on machine\nkube-system                          8m30s       Normal    Created                   
pod/kube-proxy-bootstrap-e2e-minion-group-8mzr                              Created container kube-proxy\nkube-system                          8m30s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-8mzr                              Started container kube-proxy\nkube-system                          8m31s       Normal    Pulled                    pod/kube-proxy-bootstrap-e2e-minion-group-zb1j                              Container image \"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517\" already present on machine\nkube-system                          8m31s       Normal    Created                   pod/kube-proxy-bootstrap-e2e-minion-group-zb1j                              Created container kube-proxy\nkube-system                          8m31s       Normal    Started                   pod/kube-proxy-bootstrap-e2e-minion-group-zb1j                              Started container kube-proxy\nkube-system                          9m8s        Normal    LeaderElection            endpoints/kube-scheduler                                                    bootstrap-e2e-master_c3e67c8a-e269-45b9-927a-77274498e131 became leader\nkube-system                          9m8s        Normal    LeaderElection            lease/kube-scheduler                                                        bootstrap-e2e-master_c3e67c8a-e269-45b9-927a-77274498e131 became leader\nkube-system                          8m41s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-pfn25                                   no nodes available to schedule pods\nkube-system                          8m33s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-pfn25                                   0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          8m25s       Warning   FailedScheduling          pod/kubernetes-dashboard-7778f8b456-pfn25                                   0/5 
nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m17s       Normal    Scheduled                 pod/kubernetes-dashboard-7778f8b456-pfn25                                   Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-pfn25 to bootstrap-e2e-minion-group-451g\nkube-system                          8m14s       Normal    Pulling                   pod/kubernetes-dashboard-7778f8b456-pfn25                                   Pulling image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          8m9s        Normal    Pulled                    pod/kubernetes-dashboard-7778f8b456-pfn25                                   Successfully pulled image \"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\"\nkube-system                          8m8s        Normal    Created                   pod/kubernetes-dashboard-7778f8b456-pfn25                                   Created container kubernetes-dashboard\nkube-system                          8m7s        Normal    Started                   pod/kubernetes-dashboard-7778f8b456-pfn25                                   Started container kubernetes-dashboard\nkube-system                          8m41s       Normal    SuccessfulCreate          replicaset/kubernetes-dashboard-7778f8b456                                  Created pod: kubernetes-dashboard-7778f8b456-pfn25\nkube-system                          8m41s       Normal    ScalingReplicaSet         deployment/kubernetes-dashboard                                             Scaled up replica set kubernetes-dashboard-7778f8b456 to 1\nkube-system                          8m43s       Warning   FailedScheduling          pod/l7-default-backend-678889f899-r7r8g                                     no nodes available to schedule pods\nkube-system                          8m25s       Warning   FailedScheduling          pod/l7-default-backend-678889f899-r7r8g      
                               0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          8m22s       Normal    Scheduled                 pod/l7-default-backend-678889f899-r7r8g                                     Successfully assigned kube-system/l7-default-backend-678889f899-r7r8g to bootstrap-e2e-minion-group-zb1j\nkube-system                          8m14s       Normal    Pulling                   pod/l7-default-backend-678889f899-r7r8g                                     Pulling image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                          8m10s       Normal    Pulled                    pod/l7-default-backend-678889f899-r7r8g                                     Successfully pulled image \"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\"\nkube-system                          8m10s       Normal    Created                   pod/l7-default-backend-678889f899-r7r8g                                     Created container default-http-backend\nkube-system                          8m3s        Normal    Started                   pod/l7-default-backend-678889f899-r7r8g                                     Started container default-http-backend\nkube-system                          8m48s       Warning   FailedCreate              replicaset/l7-default-backend-678889f899                                    Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: no providers available to validate pod request\nkube-system                          8m46s       Warning   FailedCreate              replicaset/l7-default-backend-678889f899                                    Error creating: pods \"l7-default-backend-678889f899-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          8m43s       Normal    SuccessfulCreate          replicaset/l7-default-backend-678889f899          
                          Created pod: l7-default-backend-678889f899-r7r8g
kube-system    8m48s   Normal    ScalingReplicaSet   deployment/l7-default-backend                Scaled up replica set l7-default-backend-678889f899 to 1
kube-system    8m40s   Normal    Created             pod/l7-lb-controller-bootstrap-e2e-master    Created container l7-lb-controller
kube-system    8m37s   Normal    Started             pod/l7-lb-controller-bootstrap-e2e-master    Started container l7-lb-controller
kube-system    8m40s   Normal    Pulled              pod/l7-lb-controller-bootstrap-e2e-master    Container image "k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1" already present on machine
kube-system    8m35s   Normal    Scheduled           pod/metadata-proxy-v0.1-4k8vw                Successfully assigned kube-system/metadata-proxy-v0.1-4k8vw to bootstrap-e2e-master
kube-system    8m33s   Normal    Pulling             pod/metadata-proxy-v0.1-4k8vw                Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m32s   Normal    Pulled              pod/metadata-proxy-v0.1-4k8vw                Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m32s   Normal    Created             pod/metadata-proxy-v0.1-4k8vw                Created container metadata-proxy
kube-system    8m31s   Normal    Started             pod/metadata-proxy-v0.1-4k8vw                Started container metadata-proxy
kube-system    8m31s   Normal    Pulling             pod/metadata-proxy-v0.1-4k8vw                Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m29s   Normal    Pulled              pod/metadata-proxy-v0.1-4k8vw                Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m28s   Normal    Created             pod/metadata-proxy-v0.1-4k8vw                Created container prometheus-to-sd-exporter
kube-system    8m26s   Normal    Started             pod/metadata-proxy-v0.1-4k8vw                Started container prometheus-to-sd-exporter
kube-system    8m31s   Normal    Scheduled           pod/metadata-proxy-v0.1-8nkx6                Successfully assigned kube-system/metadata-proxy-v0.1-8nkx6 to bootstrap-e2e-minion-group-zb1j
kube-system    8m29s   Normal    Pulling             pod/metadata-proxy-v0.1-8nkx6                Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m28s   Normal    Pulled              pod/metadata-proxy-v0.1-8nkx6                Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m26s   Normal    Created             pod/metadata-proxy-v0.1-8nkx6                Created container metadata-proxy
kube-system    8m25s   Normal    Started             pod/metadata-proxy-v0.1-8nkx6                Started container metadata-proxy
kube-system    8m25s   Normal    Pulling             pod/metadata-proxy-v0.1-8nkx6                Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m24s   Normal    Pulled              pod/metadata-proxy-v0.1-8nkx6                Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m22s   Normal    Created             pod/metadata-proxy-v0.1-8nkx6                Created container prometheus-to-sd-exporter
kube-system    8m20s   Normal    Started             pod/metadata-proxy-v0.1-8nkx6                Started container prometheus-to-sd-exporter
kube-system    8m31s   Normal    Scheduled           pod/metadata-proxy-v0.1-b2r77                Successfully assigned kube-system/metadata-proxy-v0.1-b2r77 to bootstrap-e2e-minion-group-8mzr
kube-system    8m30s   Warning   FailedMount         pod/metadata-proxy-v0.1-b2r77                MountVolume.SetUp failed for volume "metadata-proxy-token-8zwbl" : failed to sync secret cache: timed out waiting for the condition
kube-system    8m27s   Normal    Pulling             pod/metadata-proxy-v0.1-b2r77                Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m26s   Normal    Pulled              pod/metadata-proxy-v0.1-b2r77                Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m24s   Normal    Created             pod/metadata-proxy-v0.1-b2r77                Created container metadata-proxy
kube-system    8m23s   Normal    Started             pod/metadata-proxy-v0.1-b2r77                Started container metadata-proxy
kube-system    8m23s   Normal    Pulling             pod/metadata-proxy-v0.1-b2r77                Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m21s   Normal    Pulled              pod/metadata-proxy-v0.1-b2r77                Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m20s   Normal    Created             pod/metadata-proxy-v0.1-b2r77                Created container prometheus-to-sd-exporter
kube-system    8m19s   Normal    Started             pod/metadata-proxy-v0.1-b2r77                Started container prometheus-to-sd-exporter
kube-system    8m31s   Normal    Scheduled           pod/metadata-proxy-v0.1-fmpjg                Successfully assigned kube-system/metadata-proxy-v0.1-fmpjg to bootstrap-e2e-minion-group-451g
kube-system    8m30s   Warning   FailedMount         pod/metadata-proxy-v0.1-fmpjg                MountVolume.SetUp failed for volume "metadata-proxy-token-8zwbl" : failed to sync secret cache: timed out waiting for the condition
kube-system    8m28s   Normal    Pulling             pod/metadata-proxy-v0.1-fmpjg                Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m26s   Normal    Pulled              pod/metadata-proxy-v0.1-fmpjg                Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m25s   Normal    Created             pod/metadata-proxy-v0.1-fmpjg                Created container metadata-proxy
kube-system    8m24s   Normal    Started             pod/metadata-proxy-v0.1-fmpjg                Started container metadata-proxy
kube-system    8m24s   Normal    Pulling             pod/metadata-proxy-v0.1-fmpjg                Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m21s   Normal    Pulled              pod/metadata-proxy-v0.1-fmpjg                Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m21s   Normal    Created             pod/metadata-proxy-v0.1-fmpjg                Created container prometheus-to-sd-exporter
kube-system    8m19s   Normal    Started             pod/metadata-proxy-v0.1-fmpjg                Started container prometheus-to-sd-exporter
kube-system    8m30s   Normal    Scheduled           pod/metadata-proxy-v0.1-mskxw                Successfully assigned kube-system/metadata-proxy-v0.1-mskxw to bootstrap-e2e-minion-group-7fqk
kube-system    8m29s   Normal    Pulling             pod/metadata-proxy-v0.1-mskxw                Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m27s   Normal    Pulled              pod/metadata-proxy-v0.1-mskxw                Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system    8m27s   Normal    Created             pod/metadata-proxy-v0.1-mskxw                Created container metadata-proxy
kube-system    8m25s   Normal    Started             pod/metadata-proxy-v0.1-mskxw                Started container metadata-proxy
kube-system    8m25s   Normal    Pulling             pod/metadata-proxy-v0.1-mskxw                Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m24s   Normal    Pulled              pod/metadata-proxy-v0.1-mskxw                Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system    8m22s   Normal    Created             pod/metadata-proxy-v0.1-mskxw                Created container prometheus-to-sd-exporter
kube-system    8m20s   Normal    Started             pod/metadata-proxy-v0.1-mskxw                Started container prometheus-to-sd-exporter
kube-system    8m36s   Normal    SuccessfulCreate    daemonset/metadata-proxy-v0.1                Created pod: metadata-proxy-v0.1-4k8vw
kube-system    8m32s   Normal    SuccessfulCreate    daemonset/metadata-proxy-v0.1                Created pod: metadata-proxy-v0.1-8nkx6
kube-system    8m31s   Normal    SuccessfulCreate    daemonset/metadata-proxy-v0.1                Created pod: metadata-proxy-v0.1-fmpjg
kube-system    8m31s   Normal    SuccessfulCreate    daemonset/metadata-proxy-v0.1                Created pod: metadata-proxy-v0.1-b2r77
kube-system    8m31s   Normal    SuccessfulCreate    daemonset/metadata-proxy-v0.1                Created pod: metadata-proxy-v0.1-mskxw
kube-system    8m4s    Normal    Scheduled           pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-9kvrd to bootstrap-e2e-minion-group-7fqk
kube-system    8m3s    Normal    Pulling             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system    8m2s    Normal    Pulled              pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system    8m1s    Normal    Created             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Created container metrics-server
kube-system    8m      Normal    Started             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Started container metrics-server
kube-system    8m      Normal    Pulling             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system    7m59s   Normal    Pulled              pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system    7m59s   Normal    Created             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Created container metrics-server-nanny
kube-system    7m58s   Normal    Started             pod/metrics-server-v0.3.6-5f859c87d6-9kvrd   Started container metrics-server-nanny
kube-system    8m4s    Normal    SuccessfulCreate    replicaset/metrics-server-v0.3.6-5f859c87d6  Created pod: metrics-server-v0.3.6-5f859c87d6-9kvrd
kube-system    8m43s   Warning   FailedScheduling    pod/metrics-server-v0.3.6-65d4dc878-fjlh8    no nodes available to schedule pods
kube-system    8m33s   Warning   FailedScheduling    pod/metrics-server-v0.3.6-65d4dc878-fjlh8    0/1 nodes are available: 1 node(s) were unschedulable.
kube-system    8m33s   Warning   FailedScheduling    pod/metrics-server-v0.3.6-65d4dc878-fjlh8    0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
kube-system    8m27s   Warning   FailedScheduling    pod/metrics-server-v0.3.6-65d4dc878-fjlh8    0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system    8m18s   Normal    Scheduled           pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-fjlh8 to bootstrap-e2e-minion-group-zb1j
kube-system    8m16s   Normal    Pulling             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system    8m12s   Normal    Pulled              pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system    8m10s   Normal    Created             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Created container metrics-server
kube-system    8m10s   Normal    Started             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Started container metrics-server
kube-system    8m10s   Normal    Pulling             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system    8m7s    Normal    Pulled              pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system    8m7s    Normal    Created             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Created container metrics-server-nanny
kube-system    8m7s    Normal    Started             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Started container metrics-server-nanny
kube-system    7m58s   Normal    Killing             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Stopping container metrics-server-nanny
kube-system    7m58s   Normal    Killing             pod/metrics-server-v0.3.6-65d4dc878-fjlh8    Stopping container metrics-server
kube-system    8m44s   Warning   FailedCreate        replicaset/metrics-server-v0.3.6-65d4dc878   Error creating: pods "metrics-server-v0.3.6-65d4dc878-" is forbidden: unable to validate against any pod security policy: []
kube-system    8m43s   Normal    SuccessfulCreate    replicaset/metrics-server-v0.3.6-65d4dc878   Created pod: metrics-server-v0.3.6-65d4dc878-fjlh8
kube-system    7m58s   Normal    SuccessfulDelete    replicaset/metrics-server-v0.3.6-65d4dc878   Deleted pod: metrics-server-v0.3.6-65d4dc878-fjlh8
kube-system    8m45s   Normal    ScalingReplicaSet   deployment/metrics-server-v0.3.6             Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1
kube-system    8m4s    Normal    ScalingReplicaSet   deployment/metrics-server-v0.3.6             Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1
kube-system    7m58s   Normal    ScalingReplicaSet   deployment/metrics-server-v0.3.6             Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0
kube-system    8m24s   Warning   FailedScheduling    pod/volume-snapshot-controller-0             0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system    8m20s   Normal    Scheduled           pod/volume-snapshot-controller-0             Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-zb1j
kube-system    8m18s   Normal    Pulling             pod/volume-snapshot-controller-0             Pulling image "quay.io/k8scsi/snapshot-controller:v2.0.0-rc2"
kube-system    8m10s   Normal    Pulled              pod/volume-snapshot-controller-0             Successfully pulled image "quay.io/k8scsi/snapshot-controller:v2.0.0-rc2"
kube-system    8m10s   Normal    Created             pod/volume-snapshot-controller-0             Created container volume-snapshot-controller
kube-system    8m9s    Normal    Started             pod/volume-snapshot-controller-0             Started container volume-snapshot-controller
kube-system    8m36s   Normal    SuccessfulCreate    statefulset/volume-snapshot-controller       create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
kubectl-2255   9s      Normal    Scheduled           pod/update-demo-kitten-h7zx5                 Successfully assigned kubectl-2255/update-demo-kitten-h7zx5 to bootstrap-e2e-minion-group-451g
kubectl-2255   8s      Normal    Pulling             pod/update-demo-kitten-h7zx5                 Pulling image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-2255   6s      Normal    Pulled              pod/update-demo-kitten-h7zx5                 Successfully pulled image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-2255   6s      Normal    Created             pod/update-demo-kitten-h7zx5                 Created container update-demo
kubectl-2255   6s      Normal    Started             pod/update-demo-kitten-h7zx5                 Started container update-demo
kubectl-2255   9s      Normal    SuccessfulCreate    replicationcontroller/update-demo-kitten     Created pod: update-demo-kitten-h7zx5
kubectl-2255   34s     Normal    Scheduled           pod/update-demo-nautilus-4qjw6               Successfully assigned kubectl-2255/update-demo-nautilus-4qjw6 to bootstrap-e2e-minion-group-8mzr
kubectl-2255   30s     Normal    Pulled              pod/update-demo-nautilus-4qjw6               Container image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" already present on machine
kubectl-2255   30s     Normal    Created             pod/update-demo-nautilus-4qjw6               Created container update-demo
kubectl-2255   28s     Normal    Started             pod/update-demo-nautilus-4qjw6               Started container update-demo
kubectl-2255   34s     Normal    Scheduled           pod/update-demo-nautilus-jlh2n               Successfully assigned kubectl-2255/update-demo-nautilus-jlh2n to bootstrap-e2e-minion-group-451g
kubectl-2255   29s     Normal    Pulling             pod/update-demo-nautilus-jlh2n               Pulling image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
kubectl-2255   25s     Normal    Pulled              pod/update-demo-nautilus-jlh2n               Successfully pulled image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
kubectl-2255   25s     Normal    Created             pod/update-demo-nautilus-jlh2n               Created container update-demo
kubectl-2255   22s     Normal    Started             pod/update-demo-nautilus-jlh2n               Started container update-demo
kubectl-2255   34s     Normal    SuccessfulCreate    replicationcontroller/update-demo-nautilus   Created pod: update-demo-nautilus-4qjw6
kubectl-2255   34s     Normal    SuccessfulCreate    replicationcontroller/update-demo-nautilus   Created pod: update-demo-nautilus-jlh2n
kubectl-2557   <unknown>                                                                          some data here
kubectl-5623   40s     Normal    Scheduled           pod/httpd-deployment-5744c88cf4-xkcbf        Successfully assigned kubectl-5623/httpd-deployment-5744c88cf4-xkcbf to bootstrap-e2e-minion-group-451g
kubectl-5623   40s     Normal    SuccessfulCreate    replicaset/httpd-deployment-5744c88cf4       Created pod: httpd-deployment-5744c88cf4-xkcbf
kubectl-5623   45s     Normal    Scheduled           pod/httpd-deployment-78fb455947-bzph7        Successfully assigned kubectl-5623/httpd-deployment-78fb455947-bzph7 to bootstrap-e2e-minion-group-8mzr
kubectl-5623   43s     Normal    Pulled              pod/httpd-deployment-78fb455947-bzph7        Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
kubectl-5623   43s     Normal    Created             pod/httpd-deployment-78fb455947-bzph7        Created container httpd
kubectl-5623   43s     Normal    Started             pod/httpd-deployment-78fb455947-bzph7        Started container httpd
kubectl-5623   42s     Normal    Scheduled           pod/httpd-deployment-78fb455947-cnz5l        Successfully assigned kubectl-5623/httpd-deployment-78fb455947-cnz5l to bootstrap-e2e-minion-group-451g
kubectl-5623   40s     Normal    Pulling             pod/httpd-deployment-78fb455947-cnz5l        Pulling image "docker.io/library/httpd:2.4.39-alpine"
kubectl-5623   45s     Normal    Scheduled           pod/httpd-deployment-78fb455947-z9v76        Successfully assigned kubectl-5623/httpd-deployment-78fb455947-z9v76 to bootstrap-e2e-minion-group-zb1j
kubectl-5623   44s     Warning   FailedMount         pod/httpd-deployment-78fb455947-z9v76        MountVolume.SetUp failed for volume "default-token-9br76" : failed to sync secret cache: timed out waiting for the condition
kubectl-5623   41s     Normal    Pulled              pod/httpd-deployment-78fb455947-z9v76        Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
kubectl-5623   41s     Normal    Created             pod/httpd-deployment-78fb455947-z9v76        Created container httpd
kubectl-5623   40s     Normal    Started             pod/httpd-deployment-78fb455947-z9v76        Started container httpd
kubectl-5623   45s     Normal    SuccessfulCreate    replicaset/httpd-deployment-78fb455947       Created pod: httpd-deployment-78fb455947-z9v76
kubectl-5623   45s     Normal    SuccessfulCreate    replicaset/httpd-deployment-78fb455947       Created pod: httpd-deployment-78fb455947-bzph7
kubectl-5623   42s     Normal    SuccessfulCreate    replicaset/httpd-deployment-78fb455947       Created pod: httpd-deployment-78fb455947-cnz5l
kubectl-5623   46s     Normal    ScalingReplicaSet   deployment/httpd-deployment                  Scaled up replica set httpd-deployment-78fb455947 to 2
kubectl-5623   42s     Normal    ScalingReplicaSet   deployment/httpd-deployment                  Scaled up replica set httpd-deployment-78fb455947 to 3
kubectl-5623   40s     Normal    ScalingReplicaSet   deployment/httpd-deployment                  Scaled up replica set httpd-deployment-5744c88cf4 to 1
nettest-2513   109s    Normal    Scheduled           pod/netserver-0                              Successfully assigned nettest-2513/netserver-0 to bootstrap-e2e-minion-group-451g
nettest-2513   99s     Normal    Pulled              pod/netserver-0                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-2513   98s     Normal    Created             pod/netserver-0                              Created container webserver
nettest-2513   95s     Normal    Started             pod/netserver-0                              Started container webserver
nettest-2513   108s    Normal    Scheduled           pod/netserver-1                              Successfully assigned nettest-2513/netserver-1 to bootstrap-e2e-minion-group-7fqk
nettest-2513   107s    Normal    Pulled              pod/netserver-1                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-2513   107s    Normal    Created             pod/netserver-1                              Created container webserver
nettest-2513   106s    Normal    Started             pod/netserver-1                              Started container webserver
nettest-2513   108s    Normal    Scheduled           pod/netserver-2                              Successfully assigned nettest-2513/netserver-2 to bootstrap-e2e-minion-group-8mzr
nettest-2513   107s    Warning   FailedMount         pod/netserver-2                              MountVolume.SetUp failed for volume "default-token-hvjbl" : failed to sync secret cache: timed out waiting for the condition
nettest-2513   105s    Normal    Pulled              pod/netserver-2                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-2513   104s    Normal    Created             pod/netserver-2                              Created container webserver
nettest-2513   104s    Normal    Started             pod/netserver-2                              Started container webserver
nettest-2513   108s    Normal    Scheduled           pod/netserver-3                              Successfully assigned nettest-2513/netserver-3 to bootstrap-e2e-minion-group-zb1j
nettest-2513   106s    Normal    Pulled              pod/netserver-3                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-2513   106s    Normal    Created             pod/netserver-3                              Created container webserver
nettest-2513   105s    Normal    Started             pod/netserver-3                              Started container webserver
nettest-2513   73s     Normal    Scheduled           pod/test-container-pod                       Successfully assigned nettest-2513/test-container-pod to bootstrap-e2e-minion-group-zb1j
nettest-2513   73s     Normal    Pulled              pod/test-container-pod                       Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-2513   73s     Normal    Created             pod/test-container-pod                       Created container webserver
nettest-2513   72s     Normal    Started             pod/test-container-pod                       Started container webserver
nettest-5312   2m46s   Normal    Scheduled           pod/netserver-0                              Successfully assigned nettest-5312/netserver-0 to bootstrap-e2e-minion-group-451g
nettest-5312   2m31s   Normal    Pulled              pod/netserver-0                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-5312   2m30s   Normal    Created             pod/netserver-0                              Created container webserver
nettest-5312   2m25s   Normal    Started             pod/netserver-0                              Started container webserver
nettest-5312   2m45s   Normal    Scheduled           pod/netserver-1                              Successfully assigned nettest-5312/netserver-1 to bootstrap-e2e-minion-group-7fqk
nettest-5312   2m44s   Normal    Pulled              pod/netserver-1                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-5312   2m44s   Normal    Created             pod/netserver-1                              Created container webserver
nettest-5312   2m44s   Normal    Started             pod/netserver-1                              Started container webserver
nettest-5312   2m45s   Normal    Scheduled           pod/netserver-2                              Successfully assigned nettest-5312/netserver-2 to bootstrap-e2e-minion-group-8mzr
nettest-5312   2m44s   Normal    Pulled              pod/netserver-2                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-5312   2m44s   Normal    Created             pod/netserver-2                              Created container webserver
nettest-5312   2m44s   Normal    Started             pod/netserver-2                              Started container webserver
nettest-5312   2m45s   Normal    Scheduled           pod/netserver-3                              Successfully assigned nettest-5312/netserver-3 to bootstrap-e2e-minion-group-zb1j
nettest-5312   2m44s   Normal    Pulled              pod/netserver-3                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-5312   2m44s   Normal    Created             pod/netserver-3                              Created container webserver
nettest-5312   2m44s   Normal    Started             pod/netserver-3                              Started container webserver
nettest-5312   113s    Normal    Scheduled           pod/test-container-pod                       Successfully assigned nettest-5312/test-container-pod to bootstrap-e2e-minion-group-7fqk
nettest-5312   111s    Normal    Pulled              pod/test-container-pod                       Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-5312   111s    Normal    Created             pod/test-container-pod                       Created container webserver
nettest-5312   110s    Normal
Started                   pod/test-container-pod                                                      Started container webserver\nnettest-8767                         1s          Normal    Scheduled                 pod/host-test-container-pod                                                 Successfully assigned nettest-8767/host-test-container-pod to bootstrap-e2e-minion-group-zb1j\nnettest-8767                         37s         Normal    Scheduled                 pod/netserver-0                                                             Successfully assigned nettest-8767/netserver-0 to bootstrap-e2e-minion-group-451g\nnettest-8767                         31s         Normal    Pulled                    pod/netserver-0                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-8767                         30s         Normal    Created                   pod/netserver-0                                                             Created container webserver\nnettest-8767                         29s         Normal    Started                   pod/netserver-0                                                             Started container webserver\nnettest-8767                         36s         Normal    Scheduled                 pod/netserver-1                                                             Successfully assigned nettest-8767/netserver-1 to bootstrap-e2e-minion-group-7fqk\nnettest-8767                         35s         Normal    Pulled                    pod/netserver-1                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-8767                         34s         Normal    Created                   pod/netserver-1                                                             Created container webserver\nnettest-8767                         34s     
    Normal    Started                   pod/netserver-1                                                             Started container webserver\nnettest-8767                         36s         Normal    Scheduled                 pod/netserver-2                                                             Successfully assigned nettest-8767/netserver-2 to bootstrap-e2e-minion-group-8mzr\nnettest-8767                         34s         Normal    Pulled                    pod/netserver-2                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-8767                         34s         Normal    Created                   pod/netserver-2                                                             Created container webserver\nnettest-8767                         34s         Normal    Started                   pod/netserver-2                                                             Started container webserver\nnettest-8767                         36s         Normal    Scheduled                 pod/netserver-3                                                             Successfully assigned nettest-8767/netserver-3 to bootstrap-e2e-minion-group-zb1j\nnettest-8767                         33s         Normal    Pulled                    pod/netserver-3                                                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-8767                         33s         Normal    Created                   pod/netserver-3                                                             Created container webserver\nnettest-8767                         30s         Normal    Started                   pod/netserver-3                                                             Started container webserver\nnettest-8767                         1s          Normal    Scheduled                 
pod/test-container-pod                                                      Successfully assigned nettest-8767/test-container-pod to bootstrap-e2e-minion-group-zb1j\npersistent-local-volumes-test-1020   5s          Warning   FailedMount               pod/hostexec-bootstrap-e2e-minion-group-451g-bdczj                          MountVolume.SetUp failed for volume \"default-token-9mrqh\" : failed to sync secret cache: timed out waiting for the condition\npersistent-local-volumes-test-1020   4s          Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-bdczj                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-1020   4s          Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-bdczj                          Created container agnhost\npersistent-local-volumes-test-1020   3s          Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-bdczj                          Started container agnhost\npersistent-local-volumes-test-2511   3m29s       Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-7b4fk                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-2511   3m29s       Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-7b4fk                          Created container agnhost\npersistent-local-volumes-test-2511   3m24s       Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-7b4fk                          Started container agnhost\npersistent-local-volumes-test-2511   2m23s       Normal    Scheduled                 pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   Successfully assigned persistent-local-volumes-test-2511/security-context-28c3ab17-55a5-478f-8c0a-47904065b069 to 
bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-2511   2m19s       Warning   FailedMount               pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   MountVolume.SetUp failed for volume \"local-pvlgzxc\" : could not get consistent content of /proc/self/mountinfo after 3 attempts\npersistent-local-volumes-test-2511   119s        Normal    Pulled                    pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-2511   119s        Normal    Created                   pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   Created container write-pod\npersistent-local-volumes-test-2511   111s        Normal    Started                   pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   Started container write-pod\npersistent-local-volumes-test-2511   44s         Normal    Killing                   pod/security-context-28c3ab17-55a5-478f-8c0a-47904065b069                   Stopping container write-pod\npersistent-local-volumes-test-2511   77s         Normal    Scheduled                 pod/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892                   Successfully assigned persistent-local-volumes-test-2511/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892 to bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-2511   71s         Normal    Pulled                    pod/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-2511   71s         Normal    Created                   pod/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892                   Created container write-pod\npersistent-local-volumes-test-2511   67s         Normal    Started                   
pod/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892                   Started container write-pod\npersistent-local-volumes-test-2511   44s         Normal    Killing                   pod/security-context-5682c6b8-c367-4d1b-8ad5-44deb21bf892                   Stopping container write-pod\npersistent-local-volumes-test-3984   101s        Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-qc2kw                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-3984   100s        Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-qc2kw                          Created container agnhost\npersistent-local-volumes-test-3984   97s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-qc2kw                          Started container agnhost\npersistent-local-volumes-test-3984   47s         Normal    Scheduled                 pod/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda                   Successfully assigned persistent-local-volumes-test-3984/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda to bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-3984   45s         Normal    Pulled                    pod/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-3984   45s         Normal    Created                   pod/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda                   Created container write-pod\npersistent-local-volumes-test-3984   44s         Normal    Started                   pod/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda                   Started container write-pod\npersistent-local-volumes-test-3984   23s         Normal    Killing                   pod/security-context-0c8d5a92-6f46-4f52-be82-258fa4a9ebda   
                Stopping container write-pod\npersistent-local-volumes-test-3984   72s         Normal    Scheduled                 pod/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151                   Successfully assigned persistent-local-volumes-test-3984/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151 to bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-3984   61s         Normal    Pulled                    pod/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-3984   61s         Normal    Created                   pod/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151                   Created container write-pod\npersistent-local-volumes-test-3984   58s         Normal    Started                   pod/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151                   Started container write-pod\npersistent-local-volumes-test-3984   23s         Normal    Killing                   pod/security-context-2a6a2232-8fde-41c2-9ae9-5db791b2b151                   Stopping container write-pod\npersistent-local-volumes-test-5239   82s         Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-nkdwz                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-5239   82s         Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-nkdwz                          Created container agnhost\npersistent-local-volumes-test-5239   81s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-nkdwz                          Started container agnhost\npersistent-local-volumes-test-5239   67s         Normal    Scheduled                 pod/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe                   Successfully assigned 
persistent-local-volumes-test-5239/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe to bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-5239   58s         Normal    Pulled                    pod/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-5239   57s         Normal    Created                   pod/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe                   Created container write-pod\npersistent-local-volumes-test-5239   55s         Normal    Started                   pod/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe                   Started container write-pod\npersistent-local-volumes-test-5239   27s         Normal    Killing                   pod/security-context-a6bc4c2a-7530-4077-bb41-cd48782f80fe                   Stopping container write-pod\npersistent-local-volumes-test-6723   14s         Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-7gk6j                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-6723   14s         Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-7gk6j                          Created container agnhost\npersistent-local-volumes-test-6723   14s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-7gk6j                          Started container agnhost\npersistent-local-volumes-test-7270   33s         Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-srl2x                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-7270   32s         Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-srl2x                          Created 
container agnhost\npersistent-local-volumes-test-7270   30s         Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-srl2x                          Started container agnhost\npersistent-local-volumes-test-7270   17s         Normal    Scheduled                 pod/security-context-e56ab8e1-4362-4965-b36b-cc075b5c59ad                   Successfully assigned persistent-local-volumes-test-7270/security-context-e56ab8e1-4362-4965-b36b-cc075b5c59ad to bootstrap-e2e-minion-group-451g\npersistent-local-volumes-test-7270   14s         Normal    Pulled                    pod/security-context-e56ab8e1-4362-4965-b36b-cc075b5c59ad                   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-7270   14s         Normal    Created                   pod/security-context-e56ab8e1-4362-4965-b36b-cc075b5c59ad                   Created container write-pod\npersistent-local-volumes-test-7270   14s         Normal    Started                   pod/security-context-e56ab8e1-4362-4965-b36b-cc075b5c59ad                   Started container write-pod\npods-9394                            58s         Normal    Scheduled                 pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Successfully assigned pods-9394/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929 to bootstrap-e2e-minion-group-8mzr\npods-9394                            56s         Normal    Pulled                    pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\npods-9394                            56s         Normal    Created                   pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Created container nginx\npods-9394                            56s         Normal    Started                   
pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Started container nginx\npods-9394                            47s         Normal    DeadlineExceeded          pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Pod was active on the node longer than the specified deadline\npods-9394                            48s         Normal    Killing                   pod/pod-update-activedeadlineseconds-c6ebd3d7-154a-48c0-bc4b-b97c4aa15929   Stopping container nginx\nprovisioning-210                     2m4s        Normal    Started                   pod/csi-hostpathplugin-0                                                    Started container node-driver-registrar\nprovisioning-210                     2m4s        Normal    Pulled                    pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nprovisioning-210                     2m4s        Normal    Created                   pod/csi-hostpathplugin-0                                                    Created container hostpath\nprovisioning-210                     118s        Normal    Started                   pod/csi-hostpathplugin-0                                                    Started container hostpath\nprovisioning-210                     118s        Normal    Pulled                    pod/csi-hostpathplugin-0                                                    Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nprovisioning-210                     118s        Normal    Created                   pod/csi-hostpathplugin-0                                                    Created container liveness-probe\nprovisioning-210                     111s        Normal    Started                   pod/csi-hostpathplugin-0                                                    Started container liveness-probe\nprovisioning-210       
              54s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container node-driver-registrar\nprovisioning-210                     54s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container liveness-probe\nprovisioning-210                     54s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container hostpath\nprovisioning-210                     50s         Warning   Unhealthy                 pod/csi-hostpathplugin-0                                                    Liveness probe failed: Get http://10.64.2.56:9898/healthz: dial tcp 10.64.2.56:9898: connect: connection refused\nprovisioning-210                     2m25s       Normal    SuccessfulCreate          statefulset/csi-hostpathplugin                                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nprovisioning-210                     119s        Normal    Pulled                    pod/csi-snapshotter-0                                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nprovisioning-210                     119s        Normal    Created                   pod/csi-snapshotter-0                                                       Created container csi-snapshotter\nprovisioning-210                     111s        Normal    Started                   pod/csi-snapshotter-0                                                       Started container csi-snapshotter\nprovisioning-210                     51s         Normal    Killing                   pod/csi-snapshotter-0                                                       Stopping container csi-snapshotter\nprovisioning-210                     2m23s       Warning   FailedCreate             
 statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nprovisioning-210                     2m23s       Normal    SuccessfulCreate          statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nprovisioning-210                     104s        Normal    SuccessfulAttachVolume    pod/pod-subpath-test-dynamicpv-xntg                                         AttachVolume.Attach succeeded for volume \"pvc-bab42a9f-7e5a-4efe-bb85-6deb85465ef9\"\nprovisioning-210                     85s         Normal    Pulled                    pod/pod-subpath-test-dynamicpv-xntg                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-210                     85s         Normal    Created                   pod/pod-subpath-test-dynamicpv-xntg                                         Created container test-init-subpath-dynamicpv-xntg\nprovisioning-210                     84s         Normal    Started                   pod/pod-subpath-test-dynamicpv-xntg                                         Started container test-init-subpath-dynamicpv-xntg\nprovisioning-210                     83s         Normal    Pulled                    pod/pod-subpath-test-dynamicpv-xntg                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-210                     83s         Normal    Created                   pod/pod-subpath-test-dynamicpv-xntg                                         Created container test-container-subpath-dynamicpv-xntg\nprovisioning-210                     83s         Normal    Started                   
pod/pod-subpath-test-dynamicpv-xntg                                         Started container test-container-subpath-dynamicpv-xntg\nprovisioning-210                     83s         Normal    Pulled                    pod/pod-subpath-test-dynamicpv-xntg                                         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-210                     82s         Normal    Created                   pod/pod-subpath-test-dynamicpv-xntg                                         Created container test-container-volume-dynamicpv-xntg\nprovisioning-210                     80s         Normal    Started                   pod/pod-subpath-test-dynamicpv-xntg                                         Started container test-container-volume-dynamicpv-xntg\nprovisioning-3382                    13s         Normal    Scheduled                 pod/external-provisioner-lmckf                                              Successfully assigned provisioning-3382/external-provisioner-lmckf to bootstrap-e2e-minion-group-zb1j\nprovisioning-3382                    10s         Normal    Pulling                   pod/external-provisioner-lmckf                                              Pulling image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\"\nprovisioning-4395                    85s         Normal    Scheduled                 pod/gluster-server                                                          Successfully assigned provisioning-4395/gluster-server to bootstrap-e2e-minion-group-7fqk\nprovisioning-4395                    81s         Normal    Pulled                    pod/gluster-server                                                          Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nprovisioning-4395                    80s         Normal    Created                   pod/gluster-server                                                          
Created container gluster-server\nprovisioning-4395                    80s         Normal    Started                   pod/gluster-server                                                          Started container gluster-server\nprovisioning-4395                    64s         Normal    Killing                   pod/gluster-server                                                          Stopping container gluster-server\nprovisioning-4395                    75s         Normal    Scheduled                 pod/pod-subpath-test-inlinevolume-wf4l                                      Successfully assigned provisioning-4395/pod-subpath-test-inlinevolume-wf4l to bootstrap-e2e-minion-group-7fqk\nprovisioning-4395                    72s         Normal    Pulled                    pod/pod-subpath-test-inlinevolume-wf4l                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4395                    71s         Normal    Created                   pod/pod-subpath-test-inlinevolume-wf4l                                      Created container test-init-subpath-inlinevolume-wf4l\nprovisioning-4395                    70s         Normal    Started                   pod/pod-subpath-test-inlinevolume-wf4l                                      Started container test-init-subpath-inlinevolume-wf4l\nprovisioning-4395                    69s         Normal    Pulled                    pod/pod-subpath-test-inlinevolume-wf4l                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4395                    69s         Normal    Created                   pod/pod-subpath-test-inlinevolume-wf4l                                      Created container test-container-subpath-inlinevolume-wf4l\nprovisioning-4395                    68s         Normal    Started                   pod/pod-subpath-test-inlinevolume-wf4l             
                         Started container test-container-subpath-inlinevolume-wf4l\nprovisioning-4395                    68s         Normal    Pulled                    pod/pod-subpath-test-inlinevolume-wf4l                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4395                    68s         Normal    Created                   pod/pod-subpath-test-inlinevolume-wf4l                                      Created container test-container-volume-inlinevolume-wf4l\nprovisioning-4395                    67s         Normal    Started                   pod/pod-subpath-test-inlinevolume-wf4l                                      Started container test-container-volume-inlinevolume-wf4l\nprovisioning-4577                    9s          Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-8mzr-2hm7z                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-4577                    9s          Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-8mzr-2hm7z                          Created container agnhost\nprovisioning-4577                    8s          Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-8mzr-2hm7z                          Started container agnhost\nprovisioning-4577                    3s          Warning   ProvisioningFailed        persistentvolumeclaim/pvc-9f6sp                                             storageclass.storage.k8s.io \"provisioning-4577\" not found\nprovisioning-4632                    19s         Normal    Pulled                    pod/pod-subpath-test-inlinevolume-524c                                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4632                    19s         Normal    Created                   
pod/pod-subpath-test-inlinevolume-524c  Created container test-init-subpath-inlinevolume-524c
provisioning-4632  18s  Normal  Started  pod/pod-subpath-test-inlinevolume-524c  Started container test-init-subpath-inlinevolume-524c
provisioning-4632  17s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-524c  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4632  17s  Normal  Created  pod/pod-subpath-test-inlinevolume-524c  Created container test-container-subpath-inlinevolume-524c
provisioning-4632  15s  Normal  Started  pod/pod-subpath-test-inlinevolume-524c  Started container test-container-subpath-inlinevolume-524c
provisioning-4632  15s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-524c  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4632  15s  Normal  Created  pod/pod-subpath-test-inlinevolume-524c  Created container test-container-volume-inlinevolume-524c
provisioning-4632  14s  Normal  Started  pod/pod-subpath-test-inlinevolume-524c  Started container test-container-volume-inlinevolume-524c
provisioning-4867  52s  Normal  Pulled  pod/csi-hostpath-attacher-0  Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
provisioning-4867  52s  Normal  Created  pod/csi-hostpath-attacher-0  Created container csi-attacher
provisioning-4867  52s  Normal  Started  pod/csi-hostpath-attacher-0  Started container csi-attacher
provisioning-4867  58s  Warning  FailedCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
provisioning-4867  57s  Normal  SuccessfulCreate  statefulset/csi-hostpath-attacher  create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-4867  53s  Normal  Pulled  pod/csi-hostpath-provisioner-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
provisioning-4867  52s  Normal  Created  pod/csi-hostpath-provisioner-0  Created container csi-provisioner
provisioning-4867  52s  Normal  Started  pod/csi-hostpath-provisioner-0  Started container csi-provisioner
provisioning-4867  58s  Warning  FailedCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
provisioning-4867  57s  Normal  SuccessfulCreate  statefulset/csi-hostpath-provisioner  create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-4867  54s  Normal  Pulled  pod/csi-hostpath-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
provisioning-4867  54s  Normal  Created  pod/csi-hostpath-resizer-0  Created container csi-resizer
provisioning-4867  52s  Normal  Started  pod/csi-hostpath-resizer-0  Started container csi-resizer
provisioning-4867  58s  Warning  FailedCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
provisioning-4867  58s  Normal  SuccessfulCreate  statefulset/csi-hostpath-resizer  create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
provisioning-4867  58s  Normal  ExternalProvisioning  persistentvolumeclaim/csi-hostpathhmm44  waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-4867" or manually created by system administrator
provisioning-4867  51s  Normal  Provisioning  persistentvolumeclaim/csi-hostpathhmm44  External provisioner is provisioning volume for claim "provisioning-4867/csi-hostpathhmm44"
provisioning-4867  51s  Normal  ProvisioningSucceeded  persistentvolumeclaim/csi-hostpathhmm44  Successfully provisioned volume pvc-3bf18753-d8e0-4462-b544-15f817d42488
provisioning-4867  57s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
provisioning-4867  57s  Normal  Created  pod/csi-hostpathplugin-0  Created container node-driver-registrar
provisioning-4867  55s  Normal  Started  pod/csi-hostpathplugin-0  Started container node-driver-registrar
provisioning-4867  55s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
provisioning-4867  55s  Normal  Created  pod/csi-hostpathplugin-0  Created container hostpath
provisioning-4867  54s  Normal  Started  pod/csi-hostpathplugin-0  Started container hostpath
provisioning-4867  54s  Normal  Pulled  pod/csi-hostpathplugin-0  Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
provisioning-4867  54s  Normal  Created  pod/csi-hostpathplugin-0  Created container liveness-probe
provisioning-4867  52s  Normal  Started  pod/csi-hostpathplugin-0  Started container liveness-probe
provisioning-4867  60s  Normal  SuccessfulCreate  statefulset/csi-hostpathplugin  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
provisioning-4867  54s  Normal  Pulled  pod/csi-snapshotter-0  Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
provisioning-4867  54s  Normal  Created  pod/csi-snapshotter-0  Created container csi-snapshotter
provisioning-4867  52s  Normal  Started  pod/csi-snapshotter-0  Started container csi-snapshotter
provisioning-4867  58s  Warning  FailedCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
provisioning-4867  58s  Normal  SuccessfulCreate  statefulset/csi-snapshotter  create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
provisioning-4867  48s  Normal  SuccessfulAttachVolume  pod/pod-subpath-test-dynamicpv-wgtr  AttachVolume.Attach succeeded for volume "pvc-3bf18753-d8e0-4462-b544-15f817d42488"
provisioning-4867  37s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-wgtr  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4867  37s  Normal  Created  pod/pod-subpath-test-dynamicpv-wgtr  Created container init-volume-dynamicpv-wgtr
provisioning-4867  37s  Normal  Started  pod/pod-subpath-test-dynamicpv-wgtr  Started container init-volume-dynamicpv-wgtr
provisioning-4867  36s  Normal  Pulled  pod/pod-subpath-test-dynamicpv-wgtr  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4867  36s  Normal  Created  pod/pod-subpath-test-dynamicpv-wgtr  Created container test-container-subpath-dynamicpv-wgtr
provisioning-4867  35s  Normal  Started  pod/pod-subpath-test-dynamicpv-wgtr  Started container test-container-subpath-dynamicpv-wgtr
provisioning-5143  39s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-7fqk-qfmfp  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-5143  39s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-7fqk-qfmfp  Created container agnhost
provisioning-5143  38s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-7fqk-qfmfp  Started container agnhost
provisioning-5143  16s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-jmfm  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-5143  16s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-jmfm  Created container init-volume-preprovisionedpv-jmfm
provisioning-5143  16s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-jmfm  Started container init-volume-preprovisionedpv-jmfm
provisioning-5143  15s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-jmfm  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-5143  15s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-jmfm  Created container test-init-volume-preprovisionedpv-jmfm
provisioning-5143  14s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-jmfm  Started container test-init-volume-preprovisionedpv-jmfm
provisioning-5143  12s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-jmfm  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-5143  12s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-jmfm  Created container test-container-subpath-preprovisionedpv-jmfm
provisioning-5143  10s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-jmfm  Started container test-container-subpath-preprovisionedpv-jmfm
provisioning-5143  35s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-d7ngh  storageclass.storage.k8s.io "provisioning-5143" not found
provisioning-6204  57s  Normal  Scheduled  pod/pod-subpath-test-inlinevolume-trdl  Successfully assigned provisioning-6204/pod-subpath-test-inlinevolume-trdl to bootstrap-e2e-minion-group-8mzr
provisioning-6204  53s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-trdl  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-6204  53s  Normal  Created  pod/pod-subpath-test-inlinevolume-trdl  Created container init-volume-inlinevolume-trdl
provisioning-6204  51s  Normal  Started  pod/pod-subpath-test-inlinevolume-trdl  Started container init-volume-inlinevolume-trdl
provisioning-6204  48s  Normal  Pulled  pod/pod-subpath-test-inlinevolume-trdl  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-6204  48s  Normal  Created  pod/pod-subpath-test-inlinevolume-trdl  Created container test-container-subpath-inlinevolume-trdl
provisioning-6204  48s  Normal  Started  pod/pod-subpath-test-inlinevolume-trdl  Started container test-container-subpath-inlinevolume-trdl
provisioning-8967  56s  Normal  Pulled  pod/hostexec-bootstrap-e2e-minion-group-7fqk-z7wtm  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-8967  55s  Normal  Created  pod/hostexec-bootstrap-e2e-minion-group-7fqk-z7wtm  Created container agnhost
provisioning-8967  54s  Normal  Started  pod/hostexec-bootstrap-e2e-minion-group-7fqk-z7wtm  Started container agnhost
provisioning-8967  17s  Normal  Killing  pod/hostexec-bootstrap-e2e-minion-group-7fqk-z7wtm  Stopping container agnhost
provisioning-8967  28s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-l276  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-8967  28s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-l276  Created container test-init-subpath-preprovisionedpv-l276
provisioning-8967  27s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-l276  Started container test-init-subpath-preprovisionedpv-l276
provisioning-8967  25s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-l276  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-8967  25s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-l276  Created container test-container-subpath-preprovisionedpv-l276
provisioning-8967  24s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-l276  Started container test-container-subpath-preprovisionedpv-l276
provisioning-8967  24s  Normal  Pulled  pod/pod-subpath-test-preprovisionedpv-l276  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-8967  24s  Normal  Created  pod/pod-subpath-test-preprovisionedpv-l276  Created container test-container-volume-preprovisionedpv-l276
provisioning-8967  23s  Normal  Started  pod/pod-subpath-test-preprovisionedpv-l276  Started container test-container-volume-preprovisionedpv-l276
provisioning-8967  50s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-9bfsn  storageclass.storage.k8s.io "provisioning-8967" not found
pv-421  59s  Normal  Scheduled  pod/pod-ephm-test-projected-787p  Successfully assigned pv-421/pod-ephm-test-projected-787p to bootstrap-e2e-minion-group-8mzr
pv-421  58s  Warning  FailedMount  pod/pod-ephm-test-projected-787p  MountVolume.SetUp failed for volume "default-token-2hpbl" : failed to sync secret cache: timed out waiting for the condition
pv-421  58s  Warning  FailedMount  pod/pod-ephm-test-projected-787p  MountVolume.SetUp failed for volume "test-volume" : failed to sync secret cache: timed out waiting for the condition
pv-421  26s  Warning  FailedMount  pod/pod-ephm-test-projected-787p  MountVolume.SetUp failed for volume "test-volume" : secret "secret-pod-ephm-test" not found
secrets-1481  43s  Normal  Scheduled  pod/pod-secrets-7962e4fa-d2f5-485e-b8e1-2bc74993b58f  Successfully assigned secrets-1481/pod-secrets-7962e4fa-d2f5-485e-b8e1-2bc74993b58f to bootstrap-e2e-minion-group-8mzr
secrets-1481  42s  Normal  Pulled  pod/pod-secrets-7962e4fa-d2f5-485e-b8e1-2bc74993b58f  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-1481  42s  Normal  Created  pod/pod-secrets-7962e4fa-d2f5-485e-b8e1-2bc74993b58f  Created container secret-volume-test
secrets-1481  41s  Normal  Started  pod/pod-secrets-7962e4fa-d2f5-485e-b8e1-2bc74993b58f  Started container secret-volume-test
secrets-1813  55s  Normal  Scheduled  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Successfully assigned secrets-1813/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7 to bootstrap-e2e-minion-group-8mzr
secrets-1813  50s  Normal  Pulled  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-1813  50s  Normal  Created  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Created container dels-volume-test
secrets-1813  50s  Normal  Started  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Started container dels-volume-test
secrets-1813  50s  Normal  Pulled  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-1813  50s  Normal  Created  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Created container upds-volume-test
secrets-1813  49s  Normal  Started  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Started container upds-volume-test
secrets-1813  49s  Normal  Pulled  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
secrets-1813  49s  Normal  Created  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Created container creates-volume-test
secrets-1813  49s  Normal  Started  pod/pod-secrets-49cc97de-3166-43d2-9ccb-aa3e0d0b43b7  Started container creates-volume-test
secrets-2096  36s  Normal  Scheduled  pod/pod-configmaps-57156003-5992-4493-89ea-31cedd570040  Successfully assigned secrets-2096/pod-configmaps-57156003-5992-4493-89ea-31cedd570040 to bootstrap-e2e-minion-group-8mzr
secrets-2096  34s  Normal  Pulled  pod/pod-configmaps-57156003-5992-4493-89ea-31cedd570040  Container image "docker.io/library/busybox:1.29" already present on machine
secrets-2096  34s  Normal  Created  pod/pod-configmaps-57156003-5992-4493-89ea-31cedd570040  Created container env-test
secrets-2096  34s  Normal  Started  pod/pod-configmaps-57156003-5992-4493-89ea-31cedd570040  Started container env-test
security-context-820  26s  Normal  Scheduled  pod/security-context-868fe0ec-9a5e-49fb-a9df-a2eb5eee03ac  Successfully assigned security-context-820/security-context-868fe0ec-9a5e-49fb-a9df-a2eb5eee03ac to bootstrap-e2e-minion-group-zb1j
security-context-820  24s  Normal  Pulled  pod/security-context-868fe0ec-9a5e-49fb-a9df-a2eb5eee03ac  Container image "docker.io/library/busybox:1.29" already present on machine
security-context-820  24s  Normal  Created  pod/security-context-868fe0ec-9a5e-49fb-a9df-a2eb5eee03ac  Created container test-container
security-context-820  21s  Normal  Started  pod/security-context-868fe0ec-9a5e-49fb-a9df-a2eb5eee03ac  Started container test-container
services-8757  46s  Normal  Scheduled  pod/execpodxm9w7  Successfully assigned services-8757/execpodxm9w7 to bootstrap-e2e-minion-group-8mzr
services-8757  44s  Normal  Pulled  pod/execpodxm9w7  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-8757  44s  Normal  Created  pod/execpodxm9w7  Created container agnhost-pause
services-8757  43s  Normal  Started  pod/execpodxm9w7  Started container agnhost-pause
services-8757  57s  Normal  Scheduled  pod/nodeport-update-service-dhdl9  Successfully assigned services-8757/nodeport-update-service-dhdl9 to bootstrap-e2e-minion-group-zb1j
services-8757  56s  Warning  FailedMount  pod/nodeport-update-service-dhdl9  MountVolume.SetUp failed for volume "default-token-cp2dr" : failed to sync secret cache: timed out waiting for the condition
services-8757  52s  Normal  Pulled  pod/nodeport-update-service-dhdl9  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-8757  52s  Normal  Created  pod/nodeport-update-service-dhdl9  Created container nodeport-update-service
services-8757  51s  Normal  Started  pod/nodeport-update-service-dhdl9  Started container nodeport-update-service
services-8757  58s  Normal  Scheduled  pod/nodeport-update-service-qt64x  Successfully assigned services-8757/nodeport-update-service-qt64x to bootstrap-e2e-minion-group-8mzr
services-8757  56s  Normal  Pulled  pod/nodeport-update-service-qt64x  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-8757  56s  Normal  Created  pod/nodeport-update-service-qt64x  Created container nodeport-update-service
services-8757  55s  Normal  Started  pod/nodeport-update-service-qt64x  Started container nodeport-update-service
services-8757  58s  Normal  SuccessfulCreate  replicationcontroller/nodeport-update-service  Created pod: nodeport-update-service-qt64x
services-8757  57s  Normal  SuccessfulCreate  replicationcontroller/nodeport-update-service  Created pod: nodeport-update-service-dhdl9
statefulset-4967  3m4s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-4967/ss2-0 to bootstrap-e2e-minion-group-zb1j
statefulset-4967  3m2s  Warning  FailedMount  pod/ss2-0  MountVolume.SetUp failed for volume "default-token-npxbk" : failed to sync secret cache: timed out waiting for the condition
statefulset-4967  3m1s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4967  3m1s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-4967  3m1s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-4967  79s  Normal  Killing  pod/ss2-0  Stopping container webserver
statefulset-4967  72s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-4967/ss2-0 to bootstrap-e2e-minion-group-zb1j
statefulset-4967  70s  Warning  FailedMount  pod/ss2-0  MountVolume.SetUp failed for volume "default-token-npxbk" : failed to sync secret cache: timed out waiting for the condition
statefulset-4967  68s  Normal  Pulling  pod/ss2-0  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  44s  Normal  Pulled  pod/ss2-0  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  44s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-4967  43s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-4967  2m58s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-4967/ss2-1 to bootstrap-e2e-minion-group-7fqk
statefulset-4967  2m57s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4967  2m57s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-4967  2m57s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-4967  2m21s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-4967  93s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-4967/ss2-1 to bootstrap-e2e-minion-group-7fqk
statefulset-4967  92s  Warning  FailedMount  pod/ss2-1  MountVolume.SetUp failed for volume "default-token-npxbk" : failed to sync secret cache: timed out waiting for the condition
statefulset-4967  90s  Normal  Pulling  pod/ss2-1  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  83s  Normal  Pulled  pod/ss2-1  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  83s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-4967  82s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-4967  20s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-4967  9s  Normal  Killing  pod/ss2-1  Stopping container webserver
statefulset-4967  8s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: Get http://10.64.4.68:80/index.html: read tcp 10.64.4.1:55590->10.64.4.68:80: read: connection reset by peer
statefulset-4967  2m55s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-4967/ss2-2 to bootstrap-e2e-minion-group-8mzr
statefulset-4967  2m54s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4967  2m54s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-4967  2m53s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-4967  2m15s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-4967  2m15s  Warning  Unhealthy  pod/ss2-2  Readiness probe failed: Get http://10.64.3.44:80/index.html: dial tcp 10.64.3.44:80: connect: connection refused
statefulset-4967  2m2s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-4967/ss2-2 to bootstrap-e2e-minion-group-8mzr
statefulset-4967  119s  Normal  Pulling  pod/ss2-2  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  109s  Normal  Pulled  pod/ss2-2  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-4967  109s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-4967  108s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-4967  19s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-4967  19s  Warning  Unhealthy  pod/ss2-2  Readiness probe failed: Get http://10.64.3.56:80/index.html: dial tcp 10.64.3.56:80: connect: connection refused
statefulset-4967  13s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-4967/ss2-2 to bootstrap-e2e-minion-group-8mzr
statefulset-4967  12s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4967  12s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-4967  12s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-4967  72s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-0 in StatefulSet ss2 successful
statefulset-4967  94s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-1 in StatefulSet ss2 successful
statefulset-4967  13s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-2 in StatefulSet ss2 successful
statefulset-4967  19s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-2 in StatefulSet ss2 successful
statefulset-4967  9s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-1 in StatefulSet ss2 successful
statefulset-4967  79s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-0 in StatefulSet ss2 successful
statefulset-4967  3m3s  Warning  FailedToUpdateEndpoint  endpoints/test  Failed to update endpoint statefulset-4967/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
statefulset-5103  18s  Normal  Scheduled  pod/ss2-0
                                             Successfully assigned statefulset-5103/ss2-0 to bootstrap-e2e-minion-group-8mzr\nstatefulset-5103                     16s         Normal    Pulled                    pod/ss2-0                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-5103                     16s         Normal    Created                   pod/ss2-0                                                                   Created container webserver\nstatefulset-5103                     16s         Normal    Started                   pod/ss2-0                                                                   Started container webserver\nstatefulset-5103                     12s         Normal    Scheduled                 pod/ss2-1                                                                   Successfully assigned statefulset-5103/ss2-1 to bootstrap-e2e-minion-group-zb1j\nstatefulset-5103                     11s         Normal    Pulled                    pod/ss2-1                                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-5103                     11s         Normal    Created                   pod/ss2-1                                                                   Created container webserver\nstatefulset-5103                     9s          Normal    Started                   pod/ss2-1                                                                   Started container webserver\nstatefulset-5103                     7s          Normal    Scheduled                 pod/ss2-2                                                                   Successfully assigned statefulset-5103/ss2-2 to bootstrap-e2e-minion-group-451g\nstatefulset-5103                     6s          Normal    Pulled                    pod/ss2-2                                          
                         Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-5103                     6s          Normal    Created                   pod/ss2-2                                                                   Created container webserver\nstatefulset-5103                     5s          Normal    Started                   pod/ss2-2                                                                   Started container webserver\nstatefulset-5103                     18s         Normal    SuccessfulCreate          statefulset/ss2                                                             create Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-5103                     13s         Normal    SuccessfulCreate          statefulset/ss2                                                             create Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-5103                     7s          Normal    SuccessfulCreate          statefulset/ss2                                                             create Pod ss2-2 in StatefulSet ss2 successful\nsvc-latency-9315                     2m15s       Normal    Scheduled                 pod/svc-latency-rc-xwdrh                                                    Successfully assigned svc-latency-9315/svc-latency-rc-xwdrh to bootstrap-e2e-minion-group-7fqk\nsvc-latency-9315                     2m14s       Warning   FailedMount               pod/svc-latency-rc-xwdrh                                                    MountVolume.SetUp failed for volume \"default-token-7zskn\" : failed to sync secret cache: timed out waiting for the condition\nsvc-latency-9315                     2m13s       Normal    Pulled                    pod/svc-latency-rc-xwdrh                                                    Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nsvc-latency-9315                     2m13s       Normal    Created                   
pod/svc-latency-rc-xwdrh                                                    Created container svc-latency-rc\nsvc-latency-9315                     2m12s       Normal    Started                   pod/svc-latency-rc-xwdrh                                                    Started container svc-latency-rc\nsvc-latency-9315                     2m16s       Normal    SuccessfulCreate          replicationcontroller/svc-latency-rc                                        Created pod: svc-latency-rc-xwdrh\nvolume-201                           70s         Normal    Scheduled                 pod/gcepd-client                                                            Successfully assigned volume-201/gcepd-client to bootstrap-e2e-minion-group-zb1j\nvolume-201                           62s         Normal    SuccessfulAttachVolume    pod/gcepd-client                                                            AttachVolume.Attach succeeded for volume \"pvc-83fd9abc-a84e-4314-8567-027cac1839a5\"\nvolume-201                           48s         Normal    Pulled                    pod/gcepd-client                                                            Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-201                           47s         Normal    Created                   pod/gcepd-client                                                            Created container gcepd-client\nvolume-201                           47s         Normal    Started                   pod/gcepd-client                                                            Started container gcepd-client\nvolume-201                           38s         Normal    Killing                   pod/gcepd-client                                                            Stopping container gcepd-client\nvolume-201                           112s        Normal    Scheduled                 pod/gcepd-injector                                                          Successfully assigned 
volume-201/gcepd-injector to bootstrap-e2e-minion-group-zb1j\nvolume-201                           107s        Normal    SuccessfulAttachVolume    pod/gcepd-injector                                                          AttachVolume.Attach succeeded for volume \"pvc-83fd9abc-a84e-4314-8567-027cac1839a5\"\nvolume-201                           97s         Normal    Pulled                    pod/gcepd-injector                                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-201                           97s         Normal    Created                   pod/gcepd-injector                                                          Created container gcepd-injector\nvolume-201                           97s         Normal    Started                   pod/gcepd-injector                                                          Started container gcepd-injector\nvolume-201                           79s         Normal    Killing                   pod/gcepd-injector                                                          Stopping container gcepd-injector\nvolume-201                           117s        Normal    WaitForFirstConsumer      persistentvolumeclaim/gcepdzwrpl                                            waiting for first consumer to be created before binding\nvolume-201                           113s        Normal    ProvisioningSucceeded     persistentvolumeclaim/gcepdzwrpl                                            Successfully provisioned volume pvc-83fd9abc-a84e-4314-8567-027cac1839a5 using kubernetes.io/gce-pd\nvolume-2730                          3s          Warning   ProvisioningFailed        persistentvolumeclaim/pvc-qxclr                                             storageclass.storage.k8s.io \"volume-2730\" not found\nvolume-4103                          17s         Normal    Scheduled                 pod/exec-volume-test-preprovisionedpv-xzsg                                  
Successfully assigned volume-4103/exec-volume-test-preprovisionedpv-xzsg to bootstrap-e2e-minion-group-8mzr\nvolume-4103                          11s         Normal    SuccessfulAttachVolume    pod/exec-volume-test-preprovisionedpv-xzsg                                  AttachVolume.Attach succeeded for volume \"gcepd-zznlw\"\nvolume-4103                          5s          Normal    Pulled                    pod/exec-volume-test-preprovisionedpv-xzsg                                  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-4103                          5s          Normal    Created                   pod/exec-volume-test-preprovisionedpv-xzsg                                  Created container exec-container-preprovisionedpv-xzsg\nvolume-4103                          5s          Normal    Started                   pod/exec-volume-test-preprovisionedpv-xzsg                                  Started container exec-container-preprovisionedpv-xzsg\nvolume-4103                          21s         Warning   ProvisioningFailed        persistentvolumeclaim/pvc-mcbjh                                             storageclass.storage.k8s.io \"volume-4103\" not found\nvolume-5822                          4m23s       Normal    Pulled                    pod/hostexec-bootstrap-e2e-minion-group-451g-skplv                          Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-5822                          4m23s       Normal    Created                   pod/hostexec-bootstrap-e2e-minion-group-451g-skplv                          Created container agnhost\nvolume-5822                          4m21s       Normal    Started                   pod/hostexec-bootstrap-e2e-minion-group-451g-skplv                          Started container agnhost\nvolume-5822                          6s          Normal    Killing                   pod/hostexec-bootstrap-e2e-minion-group-451g-skplv           
               Stopping container agnhost\nvolume-5822                          61s         Normal    Pulled                    pod/local-client                                                            Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-5822                          61s         Normal    Created                   pod/local-client                                                            Created container local-client\nvolume-5822                          58s         Normal    Started                   pod/local-client                                                            Started container local-client\nvolume-5822                          30s         Normal    Killing                   pod/local-client                                                            Stopping container local-client\nvolume-5822                          3m16s       Normal    Pulled                    pod/local-injector                                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-5822                          3m16s       Normal    Created                   pod/local-injector                                                          Created container local-injector\nvolume-5822                          3m9s        Normal    Started                   pod/local-injector                                                          Started container local-injector\nvolume-5822                          108s        Normal    Killing                   pod/local-injector                                                          Stopping container local-injector\nvolume-5822                          3m36s       Warning   ProvisioningFailed        persistentvolumeclaim/pvc-h9sj9                                             storageclass.storage.k8s.io \"volume-5822\" not found\nvolume-6225                          65s         Normal    Scheduled                 
pod/exec-volume-test-inlinevolume-z6zr                                      Successfully assigned volume-6225/exec-volume-test-inlinevolume-z6zr to bootstrap-e2e-minion-group-8mzr\nvolume-6225                          60s         Normal    SuccessfulAttachVolume    pod/exec-volume-test-inlinevolume-z6zr                                      AttachVolume.Attach succeeded for volume \"vol1\"\nvolume-6225                          51s         Normal    Pulled                    pod/exec-volume-test-inlinevolume-z6zr                                      Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-6225                          51s         Normal    Created                   pod/exec-volume-test-inlinevolume-z6zr                                      Created container exec-container-inlinevolume-z6zr\nvolume-6225                          50s         Normal    Started                   pod/exec-volume-test-inlinevolume-z6zr                                      Started container exec-container-inlinevolume-z6zr\nvolume-expand-7251                   74s         Warning   FailedMount               pod/csi-hostpath-attacher-0                                                 MountVolume.SetUp failed for volume \"csi-attacher-token-42vmq\" : failed to sync secret cache: timed out waiting for the condition\nvolume-expand-7251                   69s         Normal    Pulling                   pod/csi-hostpath-attacher-0                                                 Pulling image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\nvolume-expand-7251                   62s         Normal    Pulled                    pod/csi-hostpath-attacher-0                                                 Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\nvolume-expand-7251                   62s         Normal    Created                   pod/csi-hostpath-attacher-0                                                 Created container 
csi-attacher\nvolume-expand-7251                   62s         Normal    Started                   pod/csi-hostpath-attacher-0                                                 Started container csi-attacher\nvolume-expand-7251                   16s         Normal    Killing                   pod/csi-hostpath-attacher-0                                                 Stopping container csi-attacher\nvolume-expand-7251                   86s         Warning   FailedCreate              statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7251                   76s         Normal    SuccessfulCreate          statefulset/csi-hostpath-attacher                                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nvolume-expand-7251                   72s         Normal    Pulling                   pod/csi-hostpath-provisioner-0                                              Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\nvolume-expand-7251                   64s         Normal    Pulled                    pod/csi-hostpath-provisioner-0                                              Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\nvolume-expand-7251                   63s         Normal    Created                   pod/csi-hostpath-provisioner-0                                              Created container csi-provisioner\nvolume-expand-7251                   63s         Normal    Started                   pod/csi-hostpath-provisioner-0                                              Started container csi-provisioner\nvolume-expand-7251                   13s         Normal    Killing                   pod/csi-hostpath-provisioner-0                                              
Stopping container csi-provisioner\nvolume-expand-7251                   81s         Warning   FailedCreate              statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7251                   75s         Normal    SuccessfulCreate          statefulset/csi-hostpath-provisioner                                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nvolume-expand-7251                   69s         Normal    Pulling                   pod/csi-hostpath-resizer-0                                                  Pulling image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nvolume-expand-7251                   62s         Normal    Pulled                    pod/csi-hostpath-resizer-0                                                  Successfully pulled image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nvolume-expand-7251                   12s         Normal    Created                   pod/csi-hostpath-resizer-0                                                  Created container csi-resizer\nvolume-expand-7251                   62s         Normal    Started                   pod/csi-hostpath-resizer-0                                                  Started container csi-resizer\nvolume-expand-7251                   12s         Normal    Pulled                    pod/csi-hostpath-resizer-0                                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nvolume-expand-7251                   12s         Warning   FailedMount               pod/csi-hostpath-resizer-0                                                  MountVolume.SetUp failed for volume \"csi-resizer-token-2t8zn\" : secret \"csi-resizer-token-2t8zn\" not 
found\nvolume-expand-7251                   11s         Warning   Failed                    pod/csi-hostpath-resizer-0                                                  Error: failed to start container \"csi-resizer\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/kubelet/pods/8241210e-ffb2-470e-b750-5a0592faef4d/volumes/kubernetes.io~secret/csi-resizer-token-2t8zn\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/29cea07f43ef82846f28361652e3304782e89bb177489f7b0a070e941c87780a/merged\\\\\\\" at \\\\\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\\\\\" caused \\\\\\\"stat /var/lib/kubelet/pods/8241210e-ffb2-470e-b750-5a0592faef4d/volumes/kubernetes.io~secret/csi-resizer-token-2t8zn: no such file or directory\\\\\\\"\\\"\": unknown\nvolume-expand-7251                   77s         Warning   FailedCreate              statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7251                   75s         Normal    SuccessfulCreate          statefulset/csi-hostpath-resizer                                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nvolume-expand-7251                   90s         Normal    Pulling                   pod/csi-hostpathplugin-0                                                    Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nvolume-expand-7251                   87s         Normal    Pulled                    pod/csi-hostpathplugin-0                                                    Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nvolume-expand-7251               
    86s         Normal    Created                   pod/csi-hostpathplugin-0                                                    Created container node-driver-registrar\nvolume-expand-7251                   86s         Normal    Started                   pod/csi-hostpathplugin-0                                                    Started container node-driver-registrar\nvolume-expand-7251                   86s         Normal    Pulling                   pod/csi-hostpathplugin-0                                                    Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nvolume-expand-7251                   79s         Normal    Pulled                    pod/csi-hostpathplugin-0                                                    Successfully pulled image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nvolume-expand-7251                   79s         Normal    Created                   pod/csi-hostpathplugin-0                                                    Created container hostpath\nvolume-expand-7251                   79s         Normal    Started                   pod/csi-hostpathplugin-0                                                    Started container hostpath\nvolume-expand-7251                   79s         Normal    Pulling                   pod/csi-hostpathplugin-0                                                    Pulling image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nvolume-expand-7251                   77s         Normal    Pulled                    pod/csi-hostpathplugin-0                                                    Successfully pulled image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nvolume-expand-7251                   77s         Normal    Created                   pod/csi-hostpathplugin-0                                                    Created container liveness-probe\nvolume-expand-7251                   77s         Normal    Started                   pod/csi-hostpathplugin-0                                                 
   Started container liveness-probe\nvolume-expand-7251                   15s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container node-driver-registrar\nvolume-expand-7251                   15s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container liveness-probe\nvolume-expand-7251                   15s         Normal    Killing                   pod/csi-hostpathplugin-0                                                    Stopping container hostpath\nvolume-expand-7251                   13s         Warning   FailedPreStopHook         pod/csi-hostpathplugin-0                                                    Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container \"node-driver-registrar\" in Pod \"csi-hostpathplugin-0_volume-expand-7251(58b647e5-d040-4940-88a4-463f53c17519)\" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \\\"exec: \\\\\\\"/bin/sh\\\\\\\": stat /bin/sh: no such file or directory\\\": unknown\\r\\n\"\nvolume-expand-7251                   93s         Normal    SuccessfulCreate          statefulset/csi-hostpathplugin                                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nvolume-expand-7251                   65s         Normal    ExternalProvisioning      persistentvolumeclaim/csi-hostpathvqjqv                                     waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-7251\" or manually created by system administrator\nvolume-expand-7251                   61s         Normal    Provisioning              
persistentvolumeclaim/csi-hostpathvqjqv                                     External provisioner is provisioning volume for claim \"volume-expand-7251/csi-hostpathvqjqv\"\nvolume-expand-7251                   61s         Normal    ProvisioningSucceeded     persistentvolumeclaim/csi-hostpathvqjqv                                     Successfully provisioned volume pvc-dcd653cb-71af-4aa7-a0e9-d7df578b1fbc\nvolume-expand-7251                   76s         Warning   FailedMount               pod/csi-snapshotter-0                                                       MountVolume.SetUp failed for volume \"csi-snapshotter-token-5t5fh\" : failed to sync secret cache: timed out waiting for the condition\nvolume-expand-7251                   73s         Normal    Pulling                   pod/csi-snapshotter-0                                                       Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nvolume-expand-7251                   64s         Normal    Pulled                    pod/csi-snapshotter-0                                                       Successfully pulled image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nvolume-expand-7251                   12s         Normal    Created                   pod/csi-snapshotter-0                                                       Created container csi-snapshotter\nvolume-expand-7251                   10s         Normal    Started                   pod/csi-snapshotter-0                                                       Started container csi-snapshotter\nvolume-expand-7251                   13s         Normal    Pulled                    pod/csi-snapshotter-0                                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nvolume-expand-7251                   9s          Warning   FailedMount               pod/csi-snapshotter-0                                                       MountVolume.SetUp failed for volume 
\"csi-snapshotter-token-5t5fh\" : secret \"csi-snapshotter-token-5t5fh\" not found\nvolume-expand-7251                   77s         Warning   FailedCreate              statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nvolume-expand-7251                   77s         Normal    SuccessfulCreate          statefulset/csi-snapshotter                                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nwebhook-1198                         31s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-nkfjn                              Successfully assigned webhook-1198/sample-webhook-deployment-5f65f8c764-nkfjn to bootstrap-e2e-minion-group-8mzr\nwebhook-1198                         26s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-nkfjn                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-1198                         25s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-nkfjn                              Created container sample-webhook\nwebhook-1198                         24s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-nkfjn                              Started container sample-webhook\nwebhook-1198                         31s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-nkfjn\nwebhook-1198                         32s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                                        Scaled up replica set 
sample-webhook-deployment-5f65f8c764 to 1\nwebhook-2599                         10s         Warning   ClusterIPNotAllocated     service/e2e-test-webhook                                                    Cluster IP 10.0.11.183 is not allocated; repairing\nwebhook-2599                         20s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-npqwl                              Successfully assigned webhook-2599/sample-webhook-deployment-5f65f8c764-npqwl to bootstrap-e2e-minion-group-8mzr\nwebhook-2599                         18s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-npqwl                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-2599                         18s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-npqwl                              Created container sample-webhook\nwebhook-2599                         18s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-npqwl                              Started container sample-webhook\nwebhook-2599                         20s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-npqwl\nwebhook-2599                         21s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-4375                         27s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-hbk98                              Successfully assigned webhook-4375/sample-webhook-deployment-5f65f8c764-hbk98 to bootstrap-e2e-minion-group-8mzr\nwebhook-4375                         24s         Normal    Pulled                    
pod/sample-webhook-deployment-5f65f8c764-hbk98                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-4375                         24s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-hbk98                              Created container sample-webhook\nwebhook-4375                         23s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-hbk98                              Started container sample-webhook\nwebhook-4375                         27s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-hbk98\nwebhook-4375                         27s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-4375                         18s         Normal    Scheduled                 pod/to-be-attached-pod                                                      Successfully assigned webhook-4375/to-be-attached-pod to bootstrap-e2e-minion-group-zb1j\nwebhook-4375                         15s         Normal    Pulled                    pod/to-be-attached-pod                                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nwebhook-4375                         15s         Normal    Created                   pod/to-be-attached-pod                                                      Created container container1\nwebhook-4375                         14s         Normal    Started                   pod/to-be-attached-pod                                                      Started container container1\nwebhook-6014                         38s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-rqmkq                              Successfully assigned webhook-6014/sample-webhook-deployment-5f65f8c764-rqmkq to bootstrap-e2e-minion-group-451g\nwebhook-6014                         31s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-rqmkq                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-6014                         30s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-rqmkq                              Created container sample-webhook\nwebhook-6014                         29s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-rqmkq                              Started container sample-webhook\nwebhook-6014                         38s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-rqmkq\nwebhook-6014                         38s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-9403                         51s         Normal    Scheduled                 pod/sample-webhook-deployment-5f65f8c764-z888c                              Successfully assigned webhook-9403/sample-webhook-deployment-5f65f8c764-z888c to bootstrap-e2e-minion-group-8mzr\nwebhook-9403                         49s         Normal    Pulled                    pod/sample-webhook-deployment-5f65f8c764-z888c                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-9403                         49s         Normal    Created                   pod/sample-webhook-deployment-5f65f8c764-z888c                              Created container sample-webhook\nwebhook-9403                         49s         Normal    Started                   pod/sample-webhook-deployment-5f65f8c764-z888c                              Started container sample-webhook\nwebhook-9403                         52s         Normal    SuccessfulCreate          replicaset/sample-webhook-deployment-5f65f8c764                             Created pod: sample-webhook-deployment-5f65f8c764-z888c\nwebhook-9403                         52s         Normal    ScalingReplicaSet         deployment/sample-webhook-deployment                                        Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-9403                         41s         Normal    Scheduled                 pod/webhook-to-be-mutated                                                   Successfully assigned webhook-9403/webhook-to-be-mutated to bootstrap-e2e-minion-group-451g\n"
Jan 16 09:11:24.556: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config get replicationcontrollers --all-namespaces'
Jan 16 09:11:25.873: INFO: stderr: ""
Jan 16 09:11:25.873: INFO: stdout: "NAMESPACE          NAME                      DESIRED   CURRENT   READY   AGE\nkubectl-2255       update-demo-kitten        1         1         1       13s\nkubectl-2255       update-demo-nautilus      1         1         1       37s\nkubectl-2557       rc1hgbzcc6kgv             1         0         0       1s\nservices-8757      nodeport-update-service   2         2         2       61s\nsvc-latency-9315   svc-latency-rc            1         1         1       2m20s\n"
Jan 16 09:11:26.914: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82


... skipping 24525 lines ...
• [SLOW TEST:11.234 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:68
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":17,"skipped":106,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":110,"failed":0}
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:18:54.199: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8403
... skipping 135 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should not run without a specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:153
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":11,"skipped":57,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:10.797: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 269 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    should support forwarding over websockets
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:460
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":18,"skipped":119,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:09.892: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multi-az-4161
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite_volumes.go:53

  Zone count is 1, only run for multi-zone clusters, skipping test

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite_volumes.go:50
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":13,"skipped":45,"failed":0}
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:12.376: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7587
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-90c25d7b-4ebc-4fe2-9fb3-306da25b2d7d
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:19:14.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7587" for this suite.
... skipping 139 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":15,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:18.763: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":15,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:18:21.766: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 60 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":16,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 153 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":96,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:20.806: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 51 lines ...
• [SLOW TEST:12.972 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:89
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":19,"skipped":104,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:24.949: INFO: Only supported for providers [aws] (not gce)
... skipping 30 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 174 lines ...
• [SLOW TEST:54.582 seconds]
[sig-autoscaling] DNS horizontal autoscaling
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:23
  [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:168
------------------------------
{"msg":"PASSED [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios","total":-1,"completed":17,"skipped":89,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:27.489: INFO: Driver local doesn't support ntfs -- skipping
... skipping 59 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":73,"failed":0}
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:18:32.498: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-7812
... skipping 22 lines ...
• [SLOW TEST:55.455 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:455
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":17,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:27.956: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
• [SLOW TEST:61.852 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:31.285: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:19:31.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 25 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 19 lines ...
• [SLOW TEST:12.789 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":17,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:32.162: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
STEP: Deleting the previously created pod
Jan 16 09:17:13.408: INFO: Deleting pod "pvc-volume-tester-fp2tc" in namespace "csi-mock-volumes-167"
Jan 16 09:17:13.501: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fp2tc" to be fully deleted
STEP: Checking CSI driver logs
Jan 16 09:17:20.008: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-167","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-167","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-167","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-31005b0f-6766-407e-a48b-1e519e4db6e0","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-31005b0f-6766-407e-a48b-1e519e4db6e0"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-167","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-167","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-31005b0f-6766-407e-a48b-1e519e4db6e0","storage.kubernetes.io/csiProvisionerIdentity":"1579166204034-8081-csi-mock-csi-mock-volumes-167"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-31005b0f-6766-407e-a48b-1e519e4db6e0/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-31005b0f-6766-407e-a48b-1e519e4db6e0","storage.kubernetes.io/csiProvisionerIdentity":"1579166204034-8081-csi-mock-csi-mock-volumes-167"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-31005b0f-6766-407e-a48b-1e519e4db6e0/globalmount","target_path":"/var/lib/kubelet/pods/5e9c671b-7559-419e-8435-71435791a1b3/volumes/kubernetes.io~csi/pvc-31005b0f-6766-407e-a48b-1e519e4db6e0/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-31005b0f-6766-407e-a48b-1e519e4db6e0","storage.kubernetes.io/csiProvisionerIdentity":"1579166204034-8081-csi-mock-csi-mock-volumes-167"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5e9c671b-7559-419e-8435-71435791a1b3/volumes/kubernetes.io~csi/pvc-31005b0f-6766-407e-a48b-1e519e4db6e0/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-31005b0f-6766-407e-a48b-1e519e4db6e0/globalmount"},"Response":{},"Error":""}

Jan 16 09:17:20.009: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-fp2tc
Jan 16 09:17:20.009: INFO: Deleting pod "pvc-volume-tester-fp2tc" in namespace "csi-mock-volumes-167"
STEP: Deleting claim pvc-tf7hk
Jan 16 09:17:20.785: INFO: Waiting up to 2m0s for PersistentVolume pvc-31005b0f-6766-407e-a48b-1e519e4db6e0 to get deleted
... skipping 89 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    should not be passed when CSIDriver does not exist
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":11,"skipped":66,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":18,"skipped":110,"failed":0}
[BeforeEach] [k8s.io] [sig-node] kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:18:17.143: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-2326
... skipping 105 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  [k8s.io] [sig-node] Clean up pods on node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    kubelet should be able to delete 10 pods per node in 1m0s.
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:340
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":19,"skipped":110,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:37.518: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":110,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:07.190: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4854
... skipping 56 lines ...
Jan 16 09:18:55.498: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:18:55.751: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:18:59.121: INFO: Unable to read jessie_udp@dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:18:59.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:18:59.607: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:18:59.855: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:00.881: INFO: Lookups using dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7 failed for: [wheezy_udp@dns-test-service.dns-8199.svc.cluster.local wheezy_tcp@dns-test-service.dns-8199.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local jessie_udp@dns-test-service.dns-8199.svc.cluster.local jessie_tcp@dns-test-service.dns-8199.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local]

Jan 16 09:19:06.445: INFO: Unable to read wheezy_udp@dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:09.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:10.777: INFO: Lookups using dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7 failed for: [wheezy_udp@dns-test-service.dns-8199.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local]

Jan 16 09:19:11.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:14.108: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:16.147: INFO: Lookups using dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7 failed for: [wheezy_udp@dns-test-service.dns-8199.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8199.svc.cluster.local]

Jan 16 09:19:21.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-8199.svc.cluster.local from pod dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7: the server could not find the requested resource (get pods dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7)
Jan 16 09:19:26.178: INFO: Lookups using dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7 failed for: [wheezy_udp@dns-test-service.dns-8199.svc.cluster.local]

Jan 16 09:19:35.994: INFO: DNS probes using dns-8199/dns-test-d47a3ca9-228d-454c-96f6-4e9c788943b7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:49.630 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":21,"skipped":140,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 122 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":16,"skipped":124,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 158 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":18,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:41.591: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:19:41.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an image specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":23,"skipped":102,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:45.199: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 140 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":15,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:46.255: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:19:46.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:224
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:237
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":17,"skipped":126,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:46.752: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 115 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":18,"skipped":102,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:48.387: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 51 lines ...
• [SLOW TEST:18.507 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:56
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:52.325: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 95 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":15,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:54.695: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 123 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with Single PV - PVC pairs
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:154
      create a PVC and a pre-bound PV: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:186
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":15,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:17.502 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":116,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:55.024: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:19:55.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 102 lines ...
• [SLOW TEST:23.002 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:46
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":18,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:17.681 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":143,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:19:56.866: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":61,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:39.510: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-366
... skipping 13 lines ...
• [SLOW TEST:17.862 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 120 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":20,"skipped":132,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:05.758: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 51 lines ...
• [SLOW TEST:15.212 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:90
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":13,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:07.544: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 139 lines ...
• [SLOW TEST:83.497 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":20,"skipped":168,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:15.409 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:12.788: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 232 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should be able to handle large requests: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:306
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":23,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:14.321: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 77 lines ...
STEP: cleaning the environment after gcepd
Jan 16 09:19:50.782: INFO: Deleting pod "gcepd-client" in namespace "volume-6859"
Jan 16 09:19:51.139: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 16 09:20:03.831: INFO: Deleting PersistentVolumeClaim "pvc-bsrbg"
Jan 16 09:20:04.406: INFO: Deleting PersistentVolume "gcepd-fpb5h"
Jan 16 09:20:06.309: INFO: error deleting PD "bootstrap-e2e-6425351d-81b0-4470-a325-b0e0c26d36a5": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-6425351d-81b0-4470-a325-b0e0c26d36a5' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7fqk', resourceInUseByAnotherResource
Jan 16 09:20:06.309: INFO: Couldn't delete PD "bootstrap-e2e-6425351d-81b0-4470-a325-b0e0c26d36a5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-6425351d-81b0-4470-a325-b0e0c26d36a5' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-7fqk', resourceInUseByAnotherResource
Jan 16 09:20:13.475: INFO: Successfully deleted PD "bootstrap-e2e-6425351d-81b0-4470-a325-b0e0c26d36a5".
Jan 16 09:20:13.475: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:13.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6859" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":17,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:14.537: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 140 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:48.198: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7835
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:14.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7835" for this suite.


• [SLOW TEST:26.937 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":24,"skipped":126,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":18,"skipped":128,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":14,"skipped":45,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:15.224: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9231
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:15.877: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:15.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":20,"skipped":84,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:15.839: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 66 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":21,"skipped":84,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:16.428: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":17,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:25.498: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":18,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:16.571: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 309 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:19.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3235" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":129,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:19.970: INFO: Driver local doesn't support ext3 -- skipping
... skipping 77 lines ...
• [SLOW TEST:15.819 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:202
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":21,"skipped":144,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:21.586: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:7.840 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":18,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:22.407: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":77,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:20:19.821: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-6931
... skipping 147 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":20,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 106 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":84,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:26.276: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:25.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8674" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":-1,"completed":22,"skipped":146,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:26.414: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 92 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-167 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jan 16 09:20:00.867: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 09:20:04.400: INFO: rc: 1
Jan 16 09:20:04.401: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-01-16 09:20:04.133289576 +0000 UTC m=+44.392162925
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jan 16 09:20:06.401: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 09:20:07.965: INFO: rc: 1
Jan 16 09:20:07.965: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-01-16 09:20:07.854349041 +0000 UTC m=+48.113222363
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jan 16 09:20:08.401: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 09:20:13.054: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jan 16 09:20:13.054: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jan 16 09:20:14.117: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/'
Jan 16 09:20:20.750: INFO: rc: 7
Jan 16 09:20:20.750: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Jan 16 09:20:22.750: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-167 execpod-t8zfk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/'
Jan 16 09:20:24.439: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-167.svc.cluster.local:80/\n"
Jan 16 09:20:24.439: INFO: stdout: "NOW: 2020-01-16 09:20:24.334815675 +0000 UTC m=+64.593689017"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-167
... skipping 9 lines ...
• [SLOW TEST:76.780 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create endpoints for unready pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1936
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":12,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:27.594: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: creating execpod-noendpoints on node bootstrap-e2e-minion-group-7fqk
Jan 16 09:20:24.362: INFO: Creating new exec pod
Jan 16 09:20:29.526: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node bootstrap-e2e-minion-group-7fqk
Jan 16 09:20:29.526: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-5586 execpod-noendpointsbhnf6 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jan 16 09:20:31.957: INFO: rc: 1
Jan 16 09:20:31.957: INFO: error contained 'REFUSED', as expected: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.211.53 --kubeconfig=/workspace/.kube/config exec --namespace=services-5586 execpod-noendpointsbhnf6 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:31.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5586" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:11.672 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be rejected when no endpoints exist
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2558
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":19,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:32.268: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 94 lines ...
• [SLOW TEST:16.872 seconds]
[k8s.io] Docker Containers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":176,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:33.812: INFO: Only supported for providers [azure] (not gce)
... skipping 109 lines ...
• [SLOW TEST:42.221 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:438
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":19,"skipped":100,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:20:32.278: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-9853
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:137
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:37.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9853" for this suite.


• [SLOW TEST:6.118 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail when exceeds active deadline
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:137
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":20,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:19.773 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":135,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:16.187 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:43.790: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 55 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:58
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":21,"skipped":112,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:44.279: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 130 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:471
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":19,"skipped":94,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:45.043: INFO: Only supported for providers [aws] (not gce)
... skipping 175 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":15,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:45.549: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 37 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":122,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:20:13.840: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-2966
... skipping 30 lines ...
• [SLOW TEST:31.729 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":22,"skipped":122,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:45.584: INFO: Only supported for providers [vsphere] (not gce)
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":16,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:45.782: INFO: Only supported for providers [azure] (not gce)
... skipping 186 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should be able to handle large requests: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:299
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":16,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:46.642: INFO: Only supported for providers [aws] (not gce)
... skipping 70 lines ...
• [SLOW TEST:25.104 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":19,"skipped":100,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:47.524: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 102 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":16,"skipped":113,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:48.541: INFO: Only supported for providers [azure] (not gce)
... skipping 151 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when the NodeLease feature is enabled
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:48
    the kubelet should create and update a lease in the kube-node-lease namespace
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":17,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:53.279: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:53.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 80 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":16,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:20:51.598: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 49 lines ...
• [SLOW TEST:16.401 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":21,"skipped":139,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 46 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":20,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:56.960: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:56.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:276
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":20,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:57.186: INFO: Only supported for providers [aws] (not gce)
... skipping 48 lines ...
• [SLOW TEST:13.937 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:102
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":23,"skipped":134,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:20:59.528: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:20:59.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":19,"skipped":120,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:02.780: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 165 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:02.872: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:02.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
• [SLOW TEST:83.146 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:04.750: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 249 lines ...
• [SLOW TEST:12.843 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:117
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":18,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 79 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":22,"skipped":91,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:09.189: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 113 lines ...
• [SLOW TEST:21.824 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1078
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":17,"skipped":132,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 86 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should not require VolumeAttach for drivers without attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":19,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:11.502: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:11.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 37 lines ...
• [SLOW TEST:6.721 seconds]
[sig-api-machinery] Discovery
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Custom resource should have storage version hash
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:44
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":20,"skipped":139,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:11.654: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 53 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on tmpfs should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:70
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":22,"skipped":140,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:11.976: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:11.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 99 lines ...
• [SLOW TEST:28.012 seconds]
[k8s.io] [sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":14,"skipped":98,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:12.605: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:12.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 146 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:361
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":24,"skipped":143,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:15.214: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
• [SLOW TEST:11.036 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":140,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
Jan 16 09:20:39.305: INFO: Creating resource for dynamic PV
Jan 16 09:20:39.305: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-255-gcepd-sc7x98b
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jan 16 09:20:40.362: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 09:20:41.193: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:43.726: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:46.049: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:48.483: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:49.917: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:51.804: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:53.535: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:55.877: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:57.874: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:20:59.753: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:02.069: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:03.546: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:05.471: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:07.920: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:10.645: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:12.318: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Jan 16 09:21:13.608: INFO: Error updating pvc gcepd29f8k: PersistentVolumeClaim "gcepd29f8k" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 09:21:13.608: INFO: Deleting PersistentVolumeClaim "gcepd29f8k"
STEP: Deleting sc
Jan 16 09:21:15.362: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 8 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":23,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:16.139: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:16.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 82 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:19:37.970: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":22,"skipped":110,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 114 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":16,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 61 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":21,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:41.925 seconds]
[sig-storage] PVC Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:118
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":22,"skipped":116,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:26.215: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 220 lines ...
• [SLOW TEST:16.340 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":21,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:29.661: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 142 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":23,"skipped":155,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:29.974: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:29.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7217" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":23,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:30.458: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:30.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 70 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.150 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":23,"skipped":117,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:31.255: INFO: Driver local doesn't support ntfs -- skipping
... skipping 159 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":16,"skipped":145,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:21:26.522: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-9471
... skipping 12 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":17,"skipped":145,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:34.562: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:34.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 322 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should update endpoints: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:217
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":11,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:38.078: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:38.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 144 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":23,"skipped":145,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:38.952: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:38.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 93 lines ...
• [SLOW TEST:16.998 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:41.329: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:14.896 seconds]
[sig-auth] Certificates API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39
------------------------------
{"msg":"PASSED [sig-auth] Certificates API should support building a client with a CSR","total":-1,"completed":16,"skipped":108,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:43.868: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":126,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:46.007: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:46.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":23,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:46.858: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:46.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 63 lines ...
  Zone count is 1, only run for multi-zone clusters, skipping test

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:52
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":19,"skipped":74,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:21:16.129: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8146
... skipping 33 lines ...
• [SLOW TEST:31.921 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":20,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:48.054: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 131 lines ...
• [SLOW TEST:17.708 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":24,"skipped":125,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:48.178: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 337 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should update endpoints: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:228
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":-1,"completed":9,"skipped":43,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:49.599: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 76 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is non-root
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:54
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":24,"skipped":155,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:51.229: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:51.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 67 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1422
    should copy a file from a running Pod
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1441
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":18,"skipped":149,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:53.544: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 181 lines ...
• [SLOW TEST:42.554 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":20,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:54.063: INFO: Driver nfs doesn't support ext4 -- skipping
... skipping 127 lines ...
• [SLOW TEST:32.566 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":24,"skipped":117,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:54.331: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:17.679 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:55.761: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:55.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 206 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":25,"skipped":134,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:21:56.478: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:21:56.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 171 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:00.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6354" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":26,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:00.394: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:00.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
... skipping 22 lines ...
• [SLOW TEST:27.780 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:789
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":18,"skipped":138,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:03.084: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
• [SLOW TEST:9.589 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":17,"skipped":119,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:06.981: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 210 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should preserve attachment policy when no CSIDriver present
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":17,"skipped":58,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 145 lines ...
• [SLOW TEST:7.948 seconds]
[sig-instrumentation] Cadvisor
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should be healthy on every node.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/cadvisor.go:42
------------------------------
{"msg":"PASSED [sig-instrumentation] Cadvisor should be healthy on every node.","total":-1,"completed":18,"skipped":128,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:14.940: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":14,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:16.685: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:16.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 66 lines ...
• [SLOW TEST:23.136 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":25,"skipped":123,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:17.480: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 232 lines ...
• [SLOW TEST:26.154 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":140,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:26.555: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 98 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":21,"skipped":142,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:35.418: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 169 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":25,"skipped":150,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:36.438: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 39 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":22,"skipped":90,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:21:53.829: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7406
... skipping 32 lines ...
• [SLOW TEST:42.774 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:36.605: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:36.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 212 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":-1,"completed":22,"skipped":186,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":23,"skipped":144,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:22:07.909: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-9136
... skipping 12 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":24,"skipped":144,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:45.073: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
• [SLOW TEST:19.629 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":145,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:46.193: INFO: Only supported for providers [vsphere] (not gce)
... skipping 42 lines ...
• [SLOW TEST:10.143 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":24,"skipped":103,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:46.772: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:46.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 57 lines ...
Jan 16 09:21:40.669: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ghrnd] to have phase Bound
Jan 16 09:21:41.030: INFO: PersistentVolumeClaim pvc-ghrnd found but phase is Pending instead of Bound.
Jan 16 09:21:43.542: INFO: PersistentVolumeClaim pvc-ghrnd found and phase=Bound (2.873243789s)
Jan 16 09:21:43.542: INFO: Waiting up to 3m0s for PersistentVolume gce-nzpln to have phase Bound
Jan 16 09:21:43.852: INFO: PersistentVolume gce-nzpln found and phase=Bound (309.973034ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Jan 16 09:22:22.357: INFO: Deleting PersistentVolumeClaim "pvc-ghrnd"
STEP: Deleting the Pod
Jan 16 09:22:23.311: INFO: Deleting pod "pvc-tester-zr766" in namespace "pv-2625"
Jan 16 09:22:23.920: INFO: Wait up to 5m0s for pod "pvc-tester-zr766" to be fully deleted
... skipping 14 lines ...
Jan 16 09:22:48.740: INFO: Successfully deleted PD "bootstrap-e2e-0c8260bf-c64b-4198-8180-e4334840035b".


• [SLOW TEST:74.604 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":24,"skipped":170,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 76 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":25,"skipped":156,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:52.832: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:52.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 78 lines ...
• [SLOW TEST:36.159 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":26,"skipped":137,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":147,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:22:51.357: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-6472
... skipping 7 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:22:53.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6472" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":23,"skipped":147,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:22:54.542: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:6.939 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":24,"skipped":155,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:01.489: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:01.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
Jan 16 09:22:52.244: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8" in namespace "security-context-test-7199" to be "success or failure"
Jan 16 09:22:52.401: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 156.846884ms
Jan 16 09:22:54.664: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.420129586s
Jan 16 09:22:57.234: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.98992123s
Jan 16 09:22:59.817: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.573586464s
Jan 16 09:23:02.080: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8": Phase="Failed", Reason="", readiness=false. Elapsed: 9.836466956s
Jan 16 09:23:02.080: INFO: Pod "busybox-readonly-true-ec171f48-9dc9-4891-951b-d893aa66d1d8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:02.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7199" for this suite.

... skipping 3 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":25,"skipped":110,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:19.798 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":149,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:04.892: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 30 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 175 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":18,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:06.154: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:06.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":80,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:23:05.996: INFO: >>> kubeConfig: /workspace/.kube/config
[It] watch and report errors with accept "application/vnd.kubernetes.protobuf"
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
Jan 16 09:23:05.998: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":22,"skipped":80,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:06.681: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 85 lines ...
      Driver emptydir doesn't support ext4 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":21,"skipped":84,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:22:40.403: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-2084
... skipping 34 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:445
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:446
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":22,"skipped":84,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 255 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":25,"skipped":132,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
... skipping 102 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":21,"skipped":111,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":51,"failed":0}
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:22:20.598: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3367
... skipping 14 lines ...
Jan 16 09:22:36.441: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:36.822: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:39.708: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:40.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:40.796: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:41.300: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:42.399: INFO: Lookups using dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local jessie_udp@dns-test-service-2.dns-3367.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3367.svc.cluster.local]

Jan 16 09:22:48.483: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:49.487: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:50.092: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:50.761: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:52.180: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:52.401: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:52.640: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:52.879: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:53.365: INFO: Lookups using dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local jessie_udp@dns-test-service-2.dns-3367.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3367.svc.cluster.local]

Jan 16 09:22:58.093: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:59.020: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:22:59.800: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:23:00.603: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:23:01.761: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:23:02.081: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local from pod dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326: the server could not find the requested resource (get pods dns-test-ae8f3951-4798-4365-828d-496c6f288326)
Jan 16 09:23:03.525: INFO: Lookups using dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3367.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3367.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3367.svc.cluster.local]

Jan 16 09:23:12.642: INFO: DNS probes using dns-3367/dns-test-ae8f3951-4798-4365-828d-496c6f288326 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 88 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":-1,"completed":26,"skipped":162,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:18.160: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 142 lines ...
STEP: cleaning the environment after gcepd
Jan 16 09:22:52.256: INFO: Deleting pod "gcepd-client" in namespace "volume-9999"
Jan 16 09:22:52.518: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 16 09:23:02.872: INFO: Deleting PersistentVolumeClaim "pvc-56wb8"
Jan 16 09:23:03.346: INFO: Deleting PersistentVolume "gcepd-vhnz7"
Jan 16 09:23:05.223: INFO: error deleting PD "bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:23:05.223: INFO: Couldn't delete PD "bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:23:11.478: INFO: error deleting PD "bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474": googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:23:11.478: INFO: Couldn't delete PD "bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474", sleeping 5s: googleapi: Error 400: The disk resource 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/disks/bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474' is already being used by 'projects/kubernetes-jkns-e2e-gce-serial/zones/us-west1-b/instances/bootstrap-e2e-minion-group-8mzr', resourceInUseByAnotherResource
Jan 16 09:23:18.558: INFO: Successfully deleted PD "bootstrap-e2e-688c9ac3-82d4-406f-9e44-c5493e6c6474".
Jan 16 09:23:18.558: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:18.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9999" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":17,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:19.098: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193

      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":51,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:23:15.650: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9949
... skipping 11 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:19.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9949" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":19,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:20.528: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 124 lines ...
Jan 16 09:21:58.311: INFO: creating *v1.StatefulSet: csi-mock-volumes-9151/csi-mockplugin
Jan 16 09:21:58.442: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-9151
Jan 16 09:21:58.763: INFO: creating *v1.StatefulSet: csi-mock-volumes-9151/csi-mockplugin-attacher
Jan 16 09:21:58.951: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9151"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jan 16 09:22:42.193: INFO: Error getting logs for pod csi-inline-volume-xch8c: the server rejected our request for an unknown reason (get pods csi-inline-volume-xch8c)
STEP: Deleting pod csi-inline-volume-xch8c in namespace csi-mock-volumes-9151
STEP: Deleting the previously created pod
Jan 16 09:22:47.993: INFO: Deleting pod "pvc-volume-tester-hp877" in namespace "csi-mock-volumes-9151"
Jan 16 09:22:49.127: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hp877" to be fully deleted
STEP: Checking CSI driver logs
Jan 16 09:23:02.336: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9151","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9151","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9151","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9151","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-c66d2b2c04306f0c712495549f43080bbc20481fc0e8d9b4969f451feef482cc","target_path":"/var/lib/kubelet/pods/ef9a27e1-8157-41d5-baf8-c17474fef4b6/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-hp877","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-9151","csi.storage.k8s.io/pod.uid":"ef9a27e1-8157-41d5-baf8-c17474fef4b6","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"csi-c66d2b2c04306f0c712495549f43080bbc20481fc0e8d9b4969f451feef482cc","volume_path":"/var/lib/kubelet/pods/ef9a27e1-8157-41d5-baf8-c17474fef4b6/volumes/kubernetes.io~csi/my-volume/mount"},"Response":null,"Error":"rpc error: code = NotFound desc = csi-c66d2b2c04306f0c712495549f43080bbc20481fc0e8d9b4969f451feef482cc"}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-c66d2b2c04306f0c712495549f43080bbc20481fc0e8d9b4969f451feef482cc","target_path":"/var/lib/kubelet/pods/ef9a27e1-8157-41d5-baf8-c17474fef4b6/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Jan 16 09:23:02.336: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 16 09:23:02.336: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-hp877
Jan 16 09:23:02.336: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9151
Jan 16 09:23:02.336: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: ef9a27e1-8157-41d5-baf8-c17474fef4b6
Jan 16 09:23:02.336: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":10,"skipped":53,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

SS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:16.199 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/limit_range.go:55
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":19,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:22.362: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 150 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:22.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-6122" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":18,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:22.856: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 242 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":25,"skipped":133,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 49 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":26,"skipped":163,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:28.196: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:28.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 59 lines ...
• [SLOW TEST:13.092 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PV bound to a PVC is not removed immediately
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:105
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":27,"skipped":175,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:31.271: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 81 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":23,"skipped":125,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:22:21.033: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6016
... skipping 30 lines ...
• [SLOW TEST:71.983 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":24,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:33.019: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:33.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":15,"skipped":93,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:23:08.836: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1283
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1054
    should create/apply a CR with unknown fields for CRD with no validation schema
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1055
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":16,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 91 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":17,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:33.933: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:33.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 112 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with Single PV - PVC pairs
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:154
      create a PV and a pre-bound PVC: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:195
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":29,"skipped":147,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:37.723: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:5.727 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":17,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:38.994: INFO: Driver gluster doesn't support ext4 -- skipping
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":26,"skipped":112,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec using resource/name
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:577
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":11,"skipped":55,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:49.641: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:49.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 84 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should contain last line of the log
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:737
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":25,"skipped":156,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:49.994: INFO: Only supported for providers [aws] (not gce)
... skipping 172 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":20,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:51.360: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:23:51.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":19,"skipped":142,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:23:53.139: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1822
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1835
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":26,"skipped":165,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:37.651 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a local redirect http liveness probe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":20,"skipped":80,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:00.055: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 178 lines ...
• [SLOW TEST:27.109 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":18,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:01.052: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:24:01.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
• [SLOW TEST:5.039 seconds]
[sig-instrumentation] MetricsGrabber
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from a Kubelet.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:53
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":25,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:01.433: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:24:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 354 lines ...
Jan 16 09:23:31.609: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after flex
Jan 16 09:23:33.869: INFO: Deleting pod "flex-client" in namespace "flexvolume-9668"
Jan 16 09:23:34.643: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Jan 16 09:23:51.458: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-9668" to be "terminated due to deadline exceeded"
Jan 16 09:23:51.783: INFO: Pod "flex-client" in namespace "flexvolume-9668" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-attachable-flexvolume-9668 from node bootstrap-e2e-minion-group-451g
Jan 16 09:24:01.783: INFO: Getting external IP address for bootstrap-e2e-minion-group-451g
Jan 16 09:24:02.403: INFO: ssh prow@35.247.44.158:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-9668
Jan 16 09:24:02.403: INFO: ssh prow@35.247.44.158:22: stdout:    ""
Jan 16 09:24:02.403: INFO: ssh prow@35.247.44.158:22: stderr:    ""
Jan 16 09:24:02.403: INFO: ssh prow@35.247.44.158:22: exit code: 0
... skipping 11 lines ...
• [SLOW TEST:52.713 seconds]
[sig-storage] Flexvolumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be mountable when attachable
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:205
------------------------------
{"msg":"PASSED [sig-storage] Flexvolumes should be mountable when attachable","total":-1,"completed":26,"skipped":133,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:03.728: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 85 lines ...
• [SLOW TEST:88.729 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":13,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:05.018: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 120 lines ...
• [SLOW TEST:14.480 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:5.847 seconds]
[sig-api-machinery] Servers with support for Table transformation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return pod details
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:51
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":19,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":21,"skipped":143,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:12.736: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:24:12.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 97 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":56,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:13.599: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":22,"skipped":114,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:13.853: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 48 lines ...
• [SLOW TEST:14.989 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":136,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 42 lines ...
• [SLOW TEST:13.496 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":27,"skipped":144,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:17.244: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":19,"skipped":133,"failed":0}
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:23:26.738: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1483
... skipping 31 lines ...
• [SLOW TEST:53.443 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":20,"skipped":133,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:20.184: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
Jan 16 09:24:24.374: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [4.184 seconds]
[sig-storage] PersistentVolumes:vsphere
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:147

  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 69 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":26,"skipped":176,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:25.533: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:24:25.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 42 lines ...
• [SLOW TEST:23.534 seconds]
[k8s.io] [sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":14,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1592
    should create an rc or deployment from an image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":-1,"completed":15,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:36.528: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 87 lines ...
• [SLOW TEST:20.330 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":28,"skipped":149,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 91 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":19,"skipped":158,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:41.096: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 53 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":145,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:11.008 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":163,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:24:52.118: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 249 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":28,"skipped":181,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:00.013: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":27,"skipped":177,"failed":0}
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:24:31.083: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-325
... skipping 24 lines ...
• [SLOW TEST:29.836 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":28,"skipped":177,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:00.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:11.479 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":186,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:03.626: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:25:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-627" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":29,"skipped":184,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:04.050: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:25:04.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
      Driver csi-hostpath doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":21,"skipped":118,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:20:56.435: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9922
... skipping 16 lines ...
• [SLOW TEST:251.546 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]","total":-1,"completed":22,"skipped":118,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":27,"skipped":178,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl create quota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2141
    should reject quota with invalid scopes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2199
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":23,"skipped":119,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with Single PV - PVC pairs
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:154
      create a PVC and non-pre-bound PV: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:177
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":20,"skipped":93,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":19,"skipped":104,"failed":0}
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 09:23:43.667: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7784
... skipping 17 lines ...
• [SLOW TEST:97.672 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:234
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":20,"skipped":104,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:21.344: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 252 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":21,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:21.607: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 09:25:21.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 189 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":23,"skipped":191,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:22.674: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 85 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":22,"skipped":62,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":18,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 09:25:29.379: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/