Result: FAILURE
Tests: 0 failed / 67 succeeded
Started: 2022-08-07 16:17
Elapsed: 17m13s
Revision: master

No Test Failures!


Passed tests: 67
Skipped tests: 11

Error lines from build-log.txt

... skipping 49 lines ...
non alpha feature gates for latest Kubernetes: CSI_PROW_E2E_GATES_LATEST=
non alpha E2E feature gates: CSI_PROW_E2E_GATES=
external-snapshotter version tag: CSI_SNAPSHOTTER_VERSION=master
tests that need to be skipped: CSI_PROW_E2E_SKIP=Disruptive
work directory: CSI_PROW_WORK=/home/prow/go/pkg/csiprow.RqsVrLN1c4
artifacts: ARTIFACTS=/logs/artifacts
Sun Aug  7 16:17:14 UTC 2022 go1.19 $ curl --fail --location -o /home/prow/go/pkg/csiprow.RqsVrLN1c4/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

100 6660k  100 6660k    0     0  24.3M      0 --:--:-- --:--:-- --:--:-- 24.3M
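The download above fetches a pinned kind release into the prow work directory. As a minimal offline sketch (version and URL taken from the log; the destination path is an assumption, and the actual fetch is left commented):

```shell
# Build the pinned kind release URL used by the job above.
KIND_VERSION=v0.11.1
URL="https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-linux-amd64"
echo "$URL"
# To fetch it as the job does (DEST is a placeholder for the work dir):
#   curl --fail --location -o "$DEST/bin/kind" "$URL" && chmod +x "$DEST/bin/kind"
```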
No kind clusters found.
INFO: kind-config.yaml:
... skipping 169 lines ...
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 306d58d Merge pull request #383 from pohly/changelog-5.0.0
Sun Aug  7 16:19:16 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test$ git clean -fdx
Sun Aug  7 16:19:16 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ curl --fail --location https://dl.google.com/go/go1.18.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 17  135M   17 24.2M    0     0  29.0M      0  0:00:04 --:--:--  0:00:04 29.0M
 36  135M   36 49.2M    0     0  26.2M      0  0:00:05  0:00:01  0:00:04 26.2M
 58  135M   58 79.0M    0     0  27.9M      0  0:00:04  0:00:02  0:00:02 27.8M
 89  135M   89  121M    0     0  30.9M      0  0:00:04  0:00:03  0:00:01 30.9M
100  135M  100  135M    0     0  28.3M      0  0:00:04  0:00:04 --:--:-- 28.3M
Sun Aug  7 16:19:21 UTC 2022 go1.18 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ go build -o /home/prow/go/pkg/csiprow.RqsVrLN1c4/csi-sanity
Sun Aug  7 16:19:34 UTC 2022 go1.19 $ /home/prow/go/pkg/csiprow.RqsVrLN1c4/csi-sanity -ginkgo.v -csi.junitfile /logs/artifacts/junit_sanity.xml -csi.endpoint dns:///172.18.0.2:32315 -csi.stagingdir /tmp/staging -csi.mountdir /tmp/mount -csi.createstagingpathcmd /home/prow/go/pkg/csiprow.RqsVrLN1c4/mkdir_in_pod.sh -csi.createmountpathcmd /home/prow/go/pkg/csiprow.RqsVrLN1c4/mkdir_in_pod.sh -csi.removestagingpathcmd /home/prow/go/pkg/csiprow.RqsVrLN1c4/rmdir_in_pod.sh -csi.removemountpathcmd /home/prow/go/pkg/csiprow.RqsVrLN1c4/rmdir_in_pod.sh -csi.checkpathcmd /home/prow/go/pkg/csiprow.RqsVrLN1c4/checkdir_in_pod.sh
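The long csi-sanity command line above reduces to a few flag groups: ginkgo verbosity, the driver endpoint, and the staging/mount directories. A minimal local invocation might look like this sketch (endpoint and directories copied from the log; the per-pod mkdir/rmdir helper-script flags are omitted, and the binary path is an assumption):

```shell
# Core csi-sanity flags (sketch; point CSI_ENDPOINT at your own driver).
CSI_ENDPOINT="dns:///172.18.0.2:32315"
SANITY_ARGS="-ginkgo.v \
 -csi.endpoint $CSI_ENDPOINT \
 -csi.stagingdir /tmp/staging \
 -csi.mountdir /tmp/mount"
echo "csi-sanity $SANITY_ARGS"
# ./csi-sanity $SANITY_ARGS   # run against a live CSI driver
```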
Running Suite: CSI Driver Test Suite - /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path
... skipping 59 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:36.306
    STEP: creating mount and staging directories 08/07/22 16:19:36.306
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ListVolumes
  should fail when an invalid starting_token is passed
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:37.064
STEP: creating mount and staging directories 08/07/22 16:19:37.065
------------------------------
• [0.704 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ListVolumes
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:175
    should fail when an invalid starting_token is passed
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:37.064
    STEP: creating mount and staging directories 08/07/22 16:19:37.065
  << End Captured GinkgoWriter Output
... skipping 23 lines ...
------------------------------
P [PENDING]
Controller Service [Controller Server] ListVolumes pagination should detect volumes added between pages and accept tokens when the last volume from a page is deleted
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:268
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:38.485
STEP: creating mount and staging directories 08/07/22 16:19:38.486
------------------------------
• [0.759 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no name is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:38.485
    STEP: creating mount and staging directories 08/07/22 16:19:38.486
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:39.245
STEP: creating mount and staging directories 08/07/22 16:19:39.245
------------------------------
• [0.725 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:39.245
    STEP: creating mount and staging directories 08/07/22 16:19:39.245
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:40.724
    STEP: creating mount and staging directories 08/07/22 16:19:40.724
    STEP: creating a volume 08/07/22 16:19:41.094
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when requesting to create a volume with already existing name and same capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:41.492
STEP: creating mount and staging directories 08/07/22 16:19:41.492
STEP: creating a volume 08/07/22 16:19:41.868
------------------------------
• [0.763 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when requesting to create a volume with already existing name and same capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:41.492
    STEP: creating mount and staging directories 08/07/22 16:19:41.492
    STEP: creating a volume 08/07/22 16:19:41.868
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when requesting to create a volume with already existing name and different capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:42.255
STEP: creating mount and staging directories 08/07/22 16:19:42.255
STEP: creating a volume 08/07/22 16:19:42.654
------------------------------
• [0.766 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when requesting to create a volume with already existing name and different capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:42.255
    STEP: creating mount and staging directories 08/07/22 16:19:42.255
    STEP: creating a volume 08/07/22 16:19:42.654
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when creating volume with maximum-length name
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:43.022
STEP: creating mount and staging directories 08/07/22 16:19:43.022
STEP: creating a volume 08/07/22 16:19:43.402
------------------------------
• [0.772 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when creating volume with maximum-length name
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:43.022
    STEP: creating mount and staging directories 08/07/22 16:19:43.022
    STEP: creating a volume 08/07/22 16:19:43.402
... skipping 21 lines ...
    STEP: creating mount and staging directories 08/07/22 16:19:43.795
    STEP: creating a snapshot 08/07/22 16:19:44.195
    STEP: creating a volume from source snapshot 08/07/22 16:19:44.204
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source snapshot is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:44.629
STEP: creating mount and staging directories 08/07/22 16:19:44.629
STEP: creating a volume from source snapshot 08/07/22 16:19:45.025
------------------------------
• [0.782 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source snapshot is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:44.629
    STEP: creating mount and staging directories 08/07/22 16:19:44.629
    STEP: creating a volume from source snapshot 08/07/22 16:19:45.025
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/07/22 16:19:45.412
    STEP: creating a volume 08/07/22 16:19:45.819
    STEP: creating a volume from source volume 08/07/22 16:19:45.82
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:46.247
STEP: creating mount and staging directories 08/07/22 16:19:46.247
STEP: creating a volume from source snapshot 08/07/22 16:19:46.604
------------------------------
• [0.713 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:46.247
    STEP: creating mount and staging directories 08/07/22 16:19:46.247
    STEP: creating a volume from source snapshot 08/07/22 16:19:46.604
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] DeleteVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:46.96
STEP: creating mount and staging directories 08/07/22 16:19:46.961
------------------------------
• [0.731 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  DeleteVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:664
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:46.96
    STEP: creating mount and staging directories 08/07/22 16:19:46.961
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: creating mount and staging directories 08/07/22 16:19:48.396
    STEP: creating a volume 08/07/22 16:19:48.753
    STEP: deleting a volume 08/07/22 16:19:48.756
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:49.139
STEP: creating mount and staging directories 08/07/22 16:19:49.14
------------------------------
• [0.758 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:49.139
    STEP: creating mount and staging directories 08/07/22 16:19:49.14
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:49.897
STEP: creating mount and staging directories 08/07/22 16:19:49.897
STEP: creating a single node writer volume 08/07/22 16:19:50.292
------------------------------
• [0.795 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:49.897
    STEP: creating mount and staging directories 08/07/22 16:19:49.897
    STEP: creating a single node writer volume 08/07/22 16:19:50.292
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/07/22 16:19:50.692
    STEP: creating a single node writer volume 08/07/22 16:19:51.07
    STEP: validating volume capabilities 08/07/22 16:19:51.072
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when the requested volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:51.498
STEP: creating mount and staging directories 08/07/22 16:19:51.498
------------------------------
• [1.341 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when the requested volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:51.498
    STEP: creating mount and staging directories 08/07/22 16:19:51.498
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:52.839
STEP: creating mount and staging directories 08/07/22 16:19:52.839
------------------------------
S [SKIPPED] [1.115 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:52.839
    STEP: creating mount and staging directories 08/07/22 16:19:52.839
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
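The SKIPPED results above come from csi-sanity probing the driver's controller capabilities: when ControllerGetCapabilities does not advertise PUBLISH_UNPUBLISH_VOLUME, every ControllerPublishVolume test is skipped in its BeforeEach. The gating logic can be sketched as below (a simplified stand-in, not the real csi-test or hostpath driver code; the type and constant names here are illustrative, whereas real drivers return csi.ControllerServiceCapability messages):

```go
package main

import "fmt"

// Capability mirrors CSI controller capability names. A driver that
// omits PUBLISH_UNPUBLISH_VOLUME causes the sanity suite to skip its
// ControllerPublishVolume tests, as seen in the log above.
type Capability string

const (
	CreateDeleteVolume     Capability = "CREATE_DELETE_VOLUME"
	PublishUnpublishVolume Capability = "PUBLISH_UNPUBLISH_VOLUME"
)

// supports reports whether the advertised capability list includes want.
func supports(caps []Capability, want Capability) bool {
	for _, c := range caps {
		if c == want {
			return true
		}
	}
	return false
}

func main() {
	// Hostpath-style driver: no attach/detach support advertised.
	caps := []Capability{CreateDeleteVolume}
	if !supports(caps, PublishUnpublishVolume) {
		fmt.Println("ControllerPublishVolume not supported -> skip")
	}
}
```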
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no node id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:53.954
STEP: creating mount and staging directories 08/07/22 16:19:53.955
------------------------------
S [SKIPPED] [0.793 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no node id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:53.954
    STEP: creating mount and staging directories 08/07/22 16:19:53.955
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:54.748
STEP: creating mount and staging directories 08/07/22 16:19:54.748
------------------------------
S [SKIPPED] [0.728 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:54.748
    STEP: creating mount and staging directories 08/07/22 16:19:54.748
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when publishing more volumes than the node max attach limit
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:55.476
STEP: creating mount and staging directories 08/07/22 16:19:55.476
------------------------------
S [SKIPPED] [0.756 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when publishing more volumes than the node max attach limit
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:55.476
    STEP: creating mount and staging directories 08/07/22 16:19:55.476
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:56.234
STEP: creating mount and staging directories 08/07/22 16:19:56.235
------------------------------
S [SKIPPED] [0.761 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:56.234
    STEP: creating mount and staging directories 08/07/22 16:19:56.235
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the node does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:56.995
STEP: creating mount and staging directories 08/07/22 16:19:56.995
------------------------------
S [SKIPPED] [0.750 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the node does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:56.995
    STEP: creating mount and staging directories 08/07/22 16:19:56.995
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume is already published but is incompatible
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:57.745
STEP: creating mount and staging directories 08/07/22 16:19:57.745
------------------------------
S [SKIPPED] [0.729 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume is already published but is incompatible
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:57.745
    STEP: creating mount and staging directories 08/07/22 16:19:57.745
  << End Captured GinkgoWriter Output
... skipping 43 lines ...
  << End Captured GinkgoWriter Output

  Controller Publish, UnpublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1059
------------------------------
Controller Service [Controller Server] ControllerUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:59.972
STEP: creating mount and staging directories 08/07/22 16:19:59.973
------------------------------
S [SKIPPED] [0.793 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerUnpublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1073
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:19:59.972
    STEP: creating mount and staging directories 08/07/22 16:19:59.973
  << End Captured GinkgoWriter Output
... skipping 62 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:02.275
    STEP: creating mount and staging directories 08/07/22 16:20:02.276
    STEP: verifying name size and characters 08/07/22 16:20:02.67
  << End Captured GinkgoWriter Output
------------------------------
ExpandVolume [Controller Server]
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:03.016
STEP: creating mount and staging directories 08/07/22 16:20:03.016
------------------------------
• [0.777 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:03.016
    STEP: creating mount and staging directories 08/07/22 16:20:03.016
  << End Captured GinkgoWriter Output
------------------------------
ExpandVolume [Controller Server]
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:03.794
STEP: creating mount and staging directories 08/07/22 16:20:03.794
------------------------------
• [0.782 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:03.794
    STEP: creating mount and staging directories 08/07/22 16:20:03.794
  << End Captured GinkgoWriter Output
... skipping 171 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:09.997
    STEP: creating mount and staging directories 08/07/22 16:20:09.998
    STEP: creating required new volumes 08/07/22 16:20:10.353
  << End Captured GinkgoWriter Output
------------------------------
DeleteSnapshot [Controller Server]
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:10.967
STEP: creating mount and staging directories 08/07/22 16:20:10.967
------------------------------
• [0.731 seconds]
DeleteSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:10.967
    STEP: creating mount and staging directories 08/07/22 16:20:10.967
  << End Captured GinkgoWriter Output
... skipping 75 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:13.894
    STEP: creating mount and staging directories 08/07/22 16:20:13.894
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:14.64
STEP: creating mount and staging directories 08/07/22 16:20:14.64
------------------------------
• [0.762 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:14.64
    STEP: creating mount and staging directories 08/07/22 16:20:14.64
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:15.402
STEP: creating mount and staging directories 08/07/22 16:20:15.402
------------------------------
• [0.714 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:15.402
    STEP: creating mount and staging directories 08/07/22 16:20:15.402
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:16.116
STEP: creating mount and staging directories 08/07/22 16:20:16.116
------------------------------
• [0.754 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:16.116
    STEP: creating mount and staging directories 08/07/22 16:20:16.116
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:16.871
STEP: creating mount and staging directories 08/07/22 16:20:16.871
------------------------------
• [0.829 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:16.871
    STEP: creating mount and staging directories 08/07/22 16:20:16.871
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:17.7
STEP: creating mount and staging directories 08/07/22 16:20:17.7
------------------------------
• [0.803 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:17.7
    STEP: creating mount and staging directories 08/07/22 16:20:17.7
  << End Captured GinkgoWriter Output
... skipping 31 lines ...
    STEP: Checking the target path exists 08/07/22 16:20:18.907
    STEP: Unpublishing the volume 08/07/22 16:20:19.11
    STEP: Checking the target path was removed 08/07/22 16:20:19.113
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:19.672
STEP: creating mount and staging directories 08/07/22 16:20:19.672
------------------------------
• [0.790 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:19.672
    STEP: creating mount and staging directories 08/07/22 16:20:19.672
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:20.462
STEP: creating mount and staging directories 08/07/22 16:20:20.462
------------------------------
• [0.815 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:20.462
    STEP: creating mount and staging directories 08/07/22 16:20:20.462
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:21.277
STEP: creating mount and staging directories 08/07/22 16:20:21.277
STEP: creating a single node writer volume 08/07/22 16:20:21.705
------------------------------
• [0.819 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:21.277
    STEP: creating mount and staging directories 08/07/22 16:20:21.277
    STEP: creating a single node writer volume 08/07/22 16:20:21.705
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:22.096
STEP: creating mount and staging directories 08/07/22 16:20:22.096
------------------------------
• [0.759 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:22.096
    STEP: creating mount and staging directories 08/07/22 16:20:22.096
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:22.856
STEP: creating mount and staging directories 08/07/22 16:20:22.856
------------------------------
• [0.737 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:22.856
    STEP: creating mount and staging directories 08/07/22 16:20:22.856
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:23.593
STEP: creating mount and staging directories 08/07/22 16:20:23.593
------------------------------
• [0.794 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:23.593
    STEP: creating mount and staging directories 08/07/22 16:20:23.593
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:24.388
STEP: creating mount and staging directories 08/07/22 16:20:24.388
------------------------------
• [0.739 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:24.388
    STEP: creating mount and staging directories 08/07/22 16:20:24.388
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:25.128
STEP: creating mount and staging directories 08/07/22 16:20:25.128
------------------------------
• [0.862 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:25.128
    STEP: creating mount and staging directories 08/07/22 16:20:25.128
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume does not exist on the specified path
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:25.99
STEP: creating mount and staging directories 08/07/22 16:20:25.99
STEP: creating a single node writer volume for expansion 08/07/22 16:20:26.351
STEP: getting a node id 08/07/22 16:20:26.353
STEP: node staging volume 08/07/22 16:20:26.354
... skipping 2 lines ...
------------------------------
• [0.738 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume does not exist on the specified path
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:25.99
    STEP: creating mount and staging directories 08/07/22 16:20:25.99
    STEP: creating a single node writer volume for expansion 08/07/22 16:20:26.351
    STEP: getting a node id 08/07/22 16:20:26.353
    STEP: node staging volume 08/07/22 16:20:26.354
    STEP: publishing the volume on a node 08/07/22 16:20:26.355
    STEP: Get node volume stats 08/07/22 16:20:26.36
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:26.728
STEP: creating mount and staging directories 08/07/22 16:20:26.728
------------------------------
• [0.726 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:26.728
    STEP: creating mount and staging directories 08/07/22 16:20:26.728
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:27.454
STEP: creating mount and staging directories 08/07/22 16:20:27.454
STEP: creating a single node writer volume for expansion 08/07/22 16:20:27.839
------------------------------
• [0.761 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:27.454
    STEP: creating mount and staging directories 08/07/22 16:20:27.454
    STEP: creating a single node writer volume for expansion 08/07/22 16:20:27.839
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:28.215
STEP: creating mount and staging directories 08/07/22 16:20:28.215
------------------------------
• [0.768 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:28.215
    STEP: creating mount and staging directories 08/07/22 16:20:28.215
  << End Captured GinkgoWriter Output
... skipping 121 lines ...
    STEP: publishing the volume on a node 08/07/22 16:20:31.097
    STEP: publishing the volume on a node 08/07/22 16:20:31.098
    STEP: Get node volume stats 08/07/22 16:20:31.099
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:31.494
STEP: creating mount and staging directories 08/07/22 16:20:31.494
------------------------------
• [0.714 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:31.494
    STEP: creating mount and staging directories 08/07/22 16:20:31.494
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:32.209
STEP: creating mount and staging directories 08/07/22 16:20:32.209
------------------------------
• [0.709 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:32.209
    STEP: creating mount and staging directories 08/07/22 16:20:32.209
  << End Captured GinkgoWriter Output
... skipping 21 lines ...
    STEP: creating a volume 08/07/22 16:20:33.274
    STEP: creating a snapshot 08/07/22 16:20:33.276
    STEP: creating a snapshot with the same name and source volume ID 08/07/22 16:20:33.28
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470
STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:33.694
STEP: creating mount and staging directories 08/07/22 16:20:33.695
STEP: creating a snapshot 08/07/22 16:20:34.099
STEP: creating a new source volume 08/07/22 16:20:34.105
STEP: creating a snapshot with the same name but different source volume ID 08/07/22 16:20:34.107
I0807 16:20:34.153398   11880 resources.go:320] deleting snapshot ID dd2814f1-166c-11ed-8b17-d2a0eeff5d77
------------------------------
• [0.839 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.2:32315 08/07/22 16:20:33.694
    STEP: creating mount and staging directories 08/07/22 16:20:33.695
    STEP: creating a snapshot 08/07/22 16:20:34.099
... skipping 30 lines ...
[ReportAfterSuite] PASSED [0.003 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Ran 67 of 78 Specs in 60.651 seconds
SUCCESS! -- 67 Passed | 0 Failed | 1 Pending | 10 Skipped
Sun Aug  7 16:20:35 UTC 2022 go1.19 $ git init /home/prow/go/src/k8s.io/kubernetes
Initialized empty Git repository in /home/prow/go/src/k8s.io/kubernetes/.git/
Sun Aug  7 16:20:35 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git fetch --depth=1 https://github.com/kubernetes/kubernetes v1.21.0
From https://github.com/kubernetes/kubernetes
 * tag                 v1.21.0    -> FETCH_HEAD
Sun Aug  7 16:20:47 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git checkout FETCH_HEAD
... skipping 11 lines ...
HEAD is now at cb303e61 Release commit for Kubernetes v1.21.0
Sun Aug  7 16:20:49 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git clean -fdx

Using a modified version of k/k/test/e2e:


Sun Aug  7 16:20:50 UTC 2022 go1.19 $ curl --fail --location https://dl.google.com/go/go1.16.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  9  123M    9 11.8M    0     0  28.0M      0  0:00:04 --:--:--  0:00:04 28.0M
 33  123M   33 41.6M    0     0  29.1M      0  0:00:04  0:00:01  0:00:03 29.1M
 60  123M   60 74.4M    0     0  31.0M      0  0:00:03  0:00:02  0:00:01 31.0M
 89  123M   89  110M    0     0  32.2M      0  0:00:03  0:00:03 --:--:-- 32.2M
100  123M  100  123M    0     0  29.4M      0  0:00:04  0:00:04 --:--:-- 29.4M
Sun Aug  7 16:20:54 UTC 2022 go1.16 $ make WHAT=test/e2e/e2e.test -C/home/prow/go/src/k8s.io/kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
make[1]: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
... skipping 119 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 238 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 69 lines ...
STEP: Creating a kubernetes client
Aug  7 16:26:31.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0807 16:26:31.973972   64658 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  7 16:26:31.974: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  7 16:26:31.989: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 16:26:31.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-3590" for this suite.


S [SKIPPING] [0.257 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 50 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 83 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 197 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 481 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 175 lines ...
STEP: Creating a kubernetes client
Aug  7 16:26:32.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0807 16:26:34.995153   64783 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  7 16:26:34.995: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  7 16:26:34.998: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 16:26:34.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-7552" for this suite.


S [SKIPPING] [2.280 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 364 lines ...
STEP: creating a claim
Aug  7 16:26:33.960: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io4dgcr] to have phase Bound
Aug  7 16:26:33.963: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4dgcr found but phase is Pending instead of Bound.
Aug  7 16:26:35.970: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4dgcr found and phase=Bound (2.010039235s)
STEP: Expanding non-expandable pvc
Aug  7 16:26:35.976: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  7 16:26:35.984: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:37.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:39.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:41.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:43.994: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:45.993: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:47.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:49.996: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:52.025: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:53.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:55.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:57.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:26:59.994: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:27:01.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:27:03.996: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:27:05.992: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:27:05.998: INFO: Error updating pvc hostpath.csi.k8s.io4dgcr: persistentvolumeclaims "hostpath.csi.k8s.io4dgcr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  7 16:27:05.998: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io4dgcr"
Aug  7 16:27:06.002: INFO: Waiting up to 5m0s for PersistentVolume pvc-b0d27137-6fd7-499f-a455-6c5261e482c7 to get deleted
Aug  7 16:27:06.005: INFO: PersistentVolume pvc-b0d27137-6fd7-499f-a455-6c5261e482c7 found and phase=Bound (2.427003ms)
Aug  7 16:27:11.009: INFO: PersistentVolume pvc-b0d27137-6fd7-499f-a455-6c5261e482c7 was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":60,"failed":0}

SSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directory
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
... skipping 17 lines ...
Aug  7 16:26:32.138: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:32.258: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iosbndp] to have phase Bound
Aug  7 16:26:32.359: INFO: PersistentVolumeClaim hostpath.csi.k8s.iosbndp found but phase is Pending instead of Bound.
Aug  7 16:26:34.364: INFO: PersistentVolumeClaim hostpath.csi.k8s.iosbndp found and phase=Bound (2.1062547s)
STEP: Creating pod pod-subpath-test-dynamicpv-mns9
STEP: Creating a pod to test subpath
Aug  7 16:26:34.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mns9" in namespace "provisioning-8940" to be "Succeeded or Failed"
Aug  7 16:26:34.388: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.077261ms
Aug  7 16:26:36.392: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01183024s
Aug  7 16:26:38.396: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015337686s
Aug  7 16:26:40.400: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019053876s
Aug  7 16:26:42.404: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023105549s
Aug  7 16:26:44.408: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027287444s
... skipping 25 lines ...
Aug  7 16:27:36.516: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.135180982s
Aug  7 16:27:38.521: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.13990436s
Aug  7 16:27:40.525: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.144320941s
Aug  7 16:27:42.528: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.147680007s
Aug  7 16:27:44.533: INFO: Pod "pod-subpath-test-dynamicpv-mns9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.152304711s
STEP: Saw pod success
Aug  7 16:27:44.533: INFO: Pod "pod-subpath-test-dynamicpv-mns9" satisfied condition "Succeeded or Failed"
Aug  7 16:27:44.536: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-mns9 container test-container-volume-dynamicpv-mns9: <nil>
STEP: delete the pod
Aug  7 16:27:44.556: INFO: Waiting for pod pod-subpath-test-dynamicpv-mns9 to disappear
Aug  7 16:27:44.559: INFO: Pod pod-subpath-test-dynamicpv-mns9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mns9
Aug  7 16:27:44.559: INFO: Deleting pod "pod-subpath-test-dynamicpv-mns9" in namespace "provisioning-8940"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":17,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing single file [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
... skipping 17 lines ...
Aug  7 16:26:33.504: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:33.509: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iod2stl] to have phase Bound
Aug  7 16:26:33.513: INFO: PersistentVolumeClaim hostpath.csi.k8s.iod2stl found but phase is Pending instead of Bound.
Aug  7 16:26:35.520: INFO: PersistentVolumeClaim hostpath.csi.k8s.iod2stl found and phase=Bound (2.010108479s)
STEP: Creating pod pod-subpath-test-dynamicpv-v67s
STEP: Creating a pod to test subpath
Aug  7 16:26:35.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-v67s" in namespace "provisioning-5068" to be "Succeeded or Failed"
Aug  7 16:26:35.532: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705757ms
Aug  7 16:26:37.537: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444746s
Aug  7 16:26:39.541: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011085869s
Aug  7 16:26:41.545: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01561815s
Aug  7 16:26:43.550: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020114889s
Aug  7 16:26:45.554: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024720459s
... skipping 34 lines ...
Aug  7 16:27:55.712: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.182269812s
Aug  7 16:27:57.715: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.185500692s
Aug  7 16:27:59.719: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.189093618s
Aug  7 16:28:01.725: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.195870752s
Aug  7 16:28:03.729: INFO: Pod "pod-subpath-test-dynamicpv-v67s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m28.199605258s
STEP: Saw pod success
Aug  7 16:28:03.729: INFO: Pod "pod-subpath-test-dynamicpv-v67s" satisfied condition "Succeeded or Failed"
Aug  7 16:28:03.732: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-v67s container test-container-subpath-dynamicpv-v67s: <nil>
STEP: delete the pod
Aug  7 16:28:03.752: INFO: Waiting for pod pod-subpath-test-dynamicpv-v67s to disappear
Aug  7 16:28:03.755: INFO: Pod pod-subpath-test-dynamicpv-v67s no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-v67s
Aug  7 16:28:03.755: INFO: Deleting pod "pod-subpath-test-dynamicpv-v67s" in namespace "provisioning-5068"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing single file [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:08.793: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 59 lines ...
Aug  7 16:26:34.703: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:34.710: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iogspgg] to have phase Bound
Aug  7 16:26:34.714: INFO: PersistentVolumeClaim hostpath.csi.k8s.iogspgg found but phase is Pending instead of Bound.
Aug  7 16:26:36.718: INFO: PersistentVolumeClaim hostpath.csi.k8s.iogspgg found and phase=Bound (2.007405273s)
STEP: Creating pod pod-subpath-test-dynamicpv-dh6r
STEP: Creating a pod to test subpath
Aug  7 16:26:36.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dh6r" in namespace "provisioning-2012" to be "Succeeded or Failed"
Aug  7 16:26:36.732: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295645ms
Aug  7 16:26:38.736: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008124446s
Aug  7 16:26:40.748: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020189976s
Aug  7 16:26:42.752: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024086392s
Aug  7 16:26:44.755: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027656212s
Aug  7 16:26:46.759: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031657215s
... skipping 36 lines ...
Aug  7 16:28:00.921: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.193728259s
Aug  7 16:28:02.925: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.197895178s
Aug  7 16:28:04.932: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.204646385s
Aug  7 16:28:06.937: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.209109802s
Aug  7 16:28:08.941: INFO: Pod "pod-subpath-test-dynamicpv-dh6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m32.213358114s
STEP: Saw pod success
Aug  7 16:28:08.941: INFO: Pod "pod-subpath-test-dynamicpv-dh6r" satisfied condition "Succeeded or Failed"
Aug  7 16:28:08.944: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-dh6r container test-container-subpath-dynamicpv-dh6r: <nil>
STEP: delete the pod
Aug  7 16:28:08.959: INFO: Waiting for pod pod-subpath-test-dynamicpv-dh6r to disappear
Aug  7 16:28:08.966: INFO: Pod pod-subpath-test-dynamicpv-dh6r no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dh6r
Aug  7 16:28:08.966: INFO: Deleting pod "pod-subpath-test-dynamicpv-dh6r" in namespace "provisioning-2012"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly file specified in the volumeMount [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":96,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:14.024: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 185 lines ...
Aug  7 16:26:33.193: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:33.220: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io4wdc4] to have phase Bound
Aug  7 16:26:33.225: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4wdc4 found but phase is Pending instead of Bound.
Aug  7 16:26:35.231: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4wdc4 found and phase=Bound (2.011449579s)
STEP: Creating pod pod-subpath-test-dynamicpv-2vvs
STEP: Creating a pod to test subpath
Aug  7 16:26:35.252: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2vvs" in namespace "provisioning-823" to be "Succeeded or Failed"
Aug  7 16:26:35.258: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.939903ms
Aug  7 16:26:37.264: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011391012s
Aug  7 16:26:39.269: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016677508s
Aug  7 16:26:41.276: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0234476s
Aug  7 16:26:43.280: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027063523s
Aug  7 16:26:45.284: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031731817s
... skipping 38 lines ...
Aug  7 16:28:03.451: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.198079213s
Aug  7 16:28:05.456: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.203435057s
Aug  7 16:28:07.461: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.207835867s
Aug  7 16:28:09.466: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.212925444s
Aug  7 16:28:11.471: INFO: Pod "pod-subpath-test-dynamicpv-2vvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m36.217799098s
STEP: Saw pod success
Aug  7 16:28:11.471: INFO: Pod "pod-subpath-test-dynamicpv-2vvs" satisfied condition "Succeeded or Failed"
Aug  7 16:28:11.473: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-2vvs container test-container-volume-dynamicpv-2vvs: <nil>
STEP: delete the pod
Aug  7 16:28:11.488: INFO: Waiting for pod pod-subpath-test-dynamicpv-2vvs to disappear
Aug  7 16:28:11.491: INFO: Pod pod-subpath-test-dynamicpv-2vvs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2vvs
Aug  7 16:28:11.491: INFO: Deleting pod "pod-subpath-test-dynamicpv-2vvs" in namespace "provisioning-823"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":57,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:16.529: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping
... skipping 81 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":114,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:26.145: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping
... skipping 34 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:26:32.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
W0807 16:26:36.095800   64515 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  7 16:26:36.095: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  7 16:26:36.099: INFO: Creating resource for dynamic PV
Aug  7 16:26:36.099: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-4801-e2e-schzql5
STEP: creating a claim
Aug  7 16:26:36.111: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2wzzr] to have phase Bound
Aug  7 16:26:36.117: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2wzzr found but phase is Pending instead of Bound.
Aug  7 16:26:38.120: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2wzzr found but phase is Pending instead of Bound.
Aug  7 16:26:40.123: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2wzzr found and phase=Bound (4.012496257s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  7 16:27:08.160: INFO: Deleting pod "pod-66819c10-1e40-4a13-9de8-a4379525d69f" in namespace "volumemode-4801"
Aug  7 16:27:08.165: INFO: Wait up to 5m0s for pod "pod-66819c10-1e40-4a13-9de8-a4379525d69f" to be fully deleted
STEP: Deleting pvc
Aug  7 16:28:22.172: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io2wzzr"
Aug  7 16:28:22.178: INFO: Waiting up to 5m0s for PersistentVolume pvc-d1366ad9-e7df-4572-b016-53b55a9c564c to get deleted
Aug  7 16:28:22.182: INFO: PersistentVolume pvc-d1366ad9-e7df-4572-b016-53b55a9c564c found and phase=Bound (4.098361ms)
... skipping 7 lines ...

• [SLOW TEST:114.352 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":1,"skipped":159,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:27.292: INFO: Driver hostpath.csi.k8s.io doesn't support ext3 -- skipping
... skipping 101 lines ...
Aug  7 16:26:34.254: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:34.260: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io6hr6t] to have phase Bound
Aug  7 16:26:34.264: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6hr6t found but phase is Pending instead of Bound.
Aug  7 16:26:36.283: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6hr6t found and phase=Bound (2.022140977s)
STEP: Creating pod exec-volume-test-dynamicpv-ptlv
STEP: Creating a pod to test exec-volume-test
Aug  7 16:26:36.302: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-ptlv" in namespace "volume-2713" to be "Succeeded or Failed"
Aug  7 16:26:36.312: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.139869ms
Aug  7 16:26:38.315: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012740111s
Aug  7 16:26:40.322: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019609882s
Aug  7 16:26:42.327: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024192112s
Aug  7 16:26:44.330: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027931207s
Aug  7 16:26:46.335: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032289823s
... skipping 46 lines ...
Aug  7 16:28:20.537: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.234473411s
Aug  7 16:28:22.541: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.238713459s
Aug  7 16:28:24.545: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.242724487s
Aug  7 16:28:26.549: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.246926662s
Aug  7 16:28:28.554: INFO: Pod "exec-volume-test-dynamicpv-ptlv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m52.251620794s
STEP: Saw pod success
Aug  7 16:28:28.554: INFO: Pod "exec-volume-test-dynamicpv-ptlv" satisfied condition "Succeeded or Failed"
Aug  7 16:28:28.558: INFO: Trying to get logs from node csi-prow-worker2 pod exec-volume-test-dynamicpv-ptlv container exec-container-dynamicpv-ptlv: <nil>
STEP: delete the pod
Aug  7 16:28:28.572: INFO: Waiting for pod exec-volume-test-dynamicpv-ptlv to disappear
Aug  7 16:28:28.578: INFO: Pod exec-volume-test-dynamicpv-ptlv no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-ptlv
Aug  7 16:28:28.578: INFO: Deleting pod "exec-volume-test-dynamicpv-ptlv" in namespace "volume-2713"
... skipping 14 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":102,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:33.638: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 85 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 16:28:45.708: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 76 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 89 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":17,"failed":0}
Aug  7 16:28:49.215: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:26:34.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Aug  7 16:26:37.396: INFO: Creating resource for dynamic PV
Aug  7 16:26:37.396: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2870-e2e-scdf7vr
STEP: creating a claim
Aug  7 16:26:37.399: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:37.405: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iorx84t] to have phase Bound
Aug  7 16:26:37.409: INFO: PersistentVolumeClaim hostpath.csi.k8s.iorx84t found but phase is Pending instead of Bound.
Aug  7 16:26:39.412: INFO: PersistentVolumeClaim hostpath.csi.k8s.iorx84t found but phase is Pending instead of Bound.
Aug  7 16:26:41.415: INFO: PersistentVolumeClaim hostpath.csi.k8s.iorx84t found and phase=Bound (4.009815114s)
STEP: Creating pod pod-subpath-test-dynamicpv-4t2g
STEP: Checking for subpath error in container status
Aug  7 16:28:17.436: INFO: Deleting pod "pod-subpath-test-dynamicpv-4t2g" in namespace "provisioning-2870"
Aug  7 16:28:17.440: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-4t2g" to be fully deleted
STEP: Deleting pod
Aug  7 16:28:51.448: INFO: Deleting pod "pod-subpath-test-dynamicpv-4t2g" in namespace "provisioning-2870"
STEP: Deleting pvc
Aug  7 16:28:51.451: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iorx84t"
... skipping 9 lines ...

• [SLOW TEST:142.055 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":180,"failed":0}
Aug  7 16:28:56.477: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volume-expand 
  should not allow expansion of pvcs without AllowVolumeExpansion property
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
... skipping 15 lines ...
Aug  7 16:28:33.880: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:28:33.889: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iolbh8s] to have phase Bound
Aug  7 16:28:33.893: INFO: PersistentVolumeClaim hostpath.csi.k8s.iolbh8s found but phase is Pending instead of Bound.
Aug  7 16:28:35.897: INFO: PersistentVolumeClaim hostpath.csi.k8s.iolbh8s found and phase=Bound (2.007740235s)
STEP: Expanding non-expandable pvc
Aug  7 16:28:35.904: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  7 16:28:35.913: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:37.921: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:39.922: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:41.921: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:43.924: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:45.923: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:47.923: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:49.923: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:51.922: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:53.923: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:55.921: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:57.923: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:28:59.925: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:29:01.922: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:29:03.926: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:29:05.924: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  7 16:29:05.933: INFO: Error updating pvc hostpath.csi.k8s.iolbh8s: persistentvolumeclaims "hostpath.csi.k8s.iolbh8s" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  7 16:29:05.933: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iolbh8s"
Aug  7 16:29:05.937: INFO: Waiting up to 5m0s for PersistentVolume pvc-c6174276-dbb9-4dcc-b8d6-3c21c94a5c56 to get deleted
Aug  7 16:29:05.944: INFO: PersistentVolume pvc-c6174276-dbb9-4dcc-b8d6-3c21c94a5c56 found and phase=Bound (7.008933ms)
Aug  7 16:29:10.949: INFO: PersistentVolume pvc-c6174276-dbb9-4dcc-b8d6-3c21c94a5c56 was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":199,"failed":0}
Aug  7 16:29:10.965: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:26:33.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Aug  7 16:26:36.650: INFO: Creating resource for dynamic PV
Aug  7 16:26:36.650: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3408-e2e-scdgpmf
STEP: creating a claim
Aug  7 16:26:36.658: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:36.667: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iop2l76] to have phase Bound
Aug  7 16:26:36.669: INFO: PersistentVolumeClaim hostpath.csi.k8s.iop2l76 found but phase is Pending instead of Bound.
Aug  7 16:26:38.674: INFO: PersistentVolumeClaim hostpath.csi.k8s.iop2l76 found but phase is Pending instead of Bound.
Aug  7 16:26:40.677: INFO: PersistentVolumeClaim hostpath.csi.k8s.iop2l76 found and phase=Bound (4.010506493s)
STEP: Creating pod pod-subpath-test-dynamicpv-6h8s
STEP: Checking for subpath error in container status
Aug  7 16:28:14.699: INFO: Deleting pod "pod-subpath-test-dynamicpv-6h8s" in namespace "provisioning-3408"
Aug  7 16:28:14.704: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-6h8s" to be fully deleted
STEP: Deleting pod
Aug  7 16:29:10.713: INFO: Deleting pod "pod-subpath-test-dynamicpv-6h8s" in namespace "provisioning-3408"
STEP: Deleting pvc
Aug  7 16:29:10.715: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iop2l76"
... skipping 9 lines ...

• [SLOW TEST:162.744 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":58,"failed":0}
Aug  7 16:29:15.748: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
... skipping 44 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should be able to unmount after the subpath directory is deleted [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":105,"failed":0}
Aug  7 16:29:18.614: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support file as subpath [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
... skipping 16 lines ...
Aug  7 16:26:38.057: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioslgnn] to have phase Bound
Aug  7 16:26:38.060: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioslgnn found but phase is Pending instead of Bound.
Aug  7 16:26:40.064: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioslgnn found but phase is Pending instead of Bound.
Aug  7 16:26:42.069: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioslgnn found and phase=Bound (4.012185046s)
STEP: Creating pod pod-subpath-test-dynamicpv-lch4
STEP: Creating a pod to test atomic-volume-subpath
Aug  7 16:26:42.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lch4" in namespace "provisioning-4973" to be "Succeeded or Failed"
Aug  7 16:26:42.085: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.475731ms
Aug  7 16:26:44.089: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007839317s
Aug  7 16:26:46.094: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012670433s
Aug  7 16:26:48.098: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016909043s
Aug  7 16:26:50.103: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021414799s
Aug  7 16:26:52.109: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027694514s
... skipping 67 lines ...
Aug  7 16:29:08.405: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Running", Reason="", readiness=true. Elapsed: 2m26.323884956s
Aug  7 16:29:10.409: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Running", Reason="", readiness=true. Elapsed: 2m28.327967517s
Aug  7 16:29:12.413: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Running", Reason="", readiness=true. Elapsed: 2m30.331806945s
Aug  7 16:29:14.420: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Running", Reason="", readiness=true. Elapsed: 2m32.33876122s
Aug  7 16:29:16.425: INFO: Pod "pod-subpath-test-dynamicpv-lch4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m34.343200329s
STEP: Saw pod success
Aug  7 16:29:16.425: INFO: Pod "pod-subpath-test-dynamicpv-lch4" satisfied condition "Succeeded or Failed"
Aug  7 16:29:16.428: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-lch4 container test-container-subpath-dynamicpv-lch4: <nil>
STEP: delete the pod
Aug  7 16:29:16.443: INFO: Waiting for pod pod-subpath-test-dynamicpv-lch4 to disappear
Aug  7 16:29:16.449: INFO: Pod pod-subpath-test-dynamicpv-lch4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lch4
Aug  7 16:29:16.449: INFO: Deleting pod "pod-subpath-test-dynamicpv-lch4" in namespace "provisioning-4973"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":336,"failed":0}
Aug  7 16:29:21.480: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO 
  should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
... skipping 42 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumeIO
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":173,"failed":0}
Aug  7 16:29:23.647: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:26:32.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0807 16:26:35.745079   64703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  7 16:26:35.745: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
Aug  7 16:26:35.748: INFO: Creating resource for dynamic PV
Aug  7 16:26:35.748: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6440-e2e-sczwcdm
STEP: creating a claim
Aug  7 16:26:35.752: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:35.761: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iokp2qj] to have phase Bound
Aug  7 16:26:35.771: INFO: PersistentVolumeClaim hostpath.csi.k8s.iokp2qj found but phase is Pending instead of Bound.
Aug  7 16:26:37.774: INFO: PersistentVolumeClaim hostpath.csi.k8s.iokp2qj found but phase is Pending instead of Bound.
Aug  7 16:26:39.778: INFO: PersistentVolumeClaim hostpath.csi.k8s.iokp2qj found and phase=Bound (4.016241041s)
STEP: Creating pod pod-subpath-test-dynamicpv-7jhc
STEP: Checking for subpath error in container status
Aug  7 16:28:37.795: INFO: Deleting pod "pod-subpath-test-dynamicpv-7jhc" in namespace "provisioning-6440"
Aug  7 16:28:37.800: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-7jhc" to be fully deleted
STEP: Deleting pod
Aug  7 16:29:27.809: INFO: Deleting pod "pod-subpath-test-dynamicpv-7jhc" in namespace "provisioning-6440"
STEP: Deleting pvc
Aug  7 16:29:27.812: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iokp2qj"
... skipping 9 lines ...

• [SLOW TEST:180.108 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":116,"failed":0}
Aug  7 16:29:32.841: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":66,"failed":0}
Aug  7 16:29:34.082: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:28:27.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  7 16:28:27.414: INFO: Creating resource for dynamic PV
Aug  7 16:28:27.414: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-2722-e2e-scgb9l8
STEP: creating a claim
Aug  7 16:28:27.439: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ionlnrl] to have phase Bound
Aug  7 16:28:27.442: INFO: PersistentVolumeClaim hostpath.csi.k8s.ionlnrl found but phase is Pending instead of Bound.
Aug  7 16:28:29.446: INFO: PersistentVolumeClaim hostpath.csi.k8s.ionlnrl found and phase=Bound (2.007100277s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  7 16:28:47.468: INFO: Deleting pod "pod-36097e52-12bf-4e98-83b9-dbbeb510aaaf" in namespace "volumemode-2722"
Aug  7 16:28:47.474: INFO: Wait up to 5m0s for pod "pod-36097e52-12bf-4e98-83b9-dbbeb510aaaf" to be fully deleted
STEP: Deleting pvc
Aug  7 16:29:29.482: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.ionlnrl"
Aug  7 16:29:29.488: INFO: Waiting up to 5m0s for PersistentVolume pvc-0ca5e74d-bee9-425e-bb96-864581d9aff6 to get deleted
Aug  7 16:29:29.491: INFO: PersistentVolume pvc-0ca5e74d-bee9-425e-bb96-864581d9aff6 found and phase=Bound (2.967472ms)
... skipping 7 lines ...

• [SLOW TEST:67.132 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":2,"skipped":269,"failed":0}
Aug  7 16:29:34.510: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 36 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":77,"failed":0}
Aug  7 16:29:40.600: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 48 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":48,"failed":0}
Aug  7 16:29:42.663: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 44 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":76,"failed":0}
Aug  7 16:29:50.322: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 16:26:36.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
Aug  7 16:26:37.993: INFO: Creating resource for dynamic PV
Aug  7 16:26:37.994: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-578-e2e-scwvx6s
STEP: creating a claim
Aug  7 16:26:37.997: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:38.005: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io4s2bq] to have phase Bound
Aug  7 16:26:38.008: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4s2bq found but phase is Pending instead of Bound.
Aug  7 16:26:40.012: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4s2bq found but phase is Pending instead of Bound.
Aug  7 16:26:42.018: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4s2bq found and phase=Bound (4.012720781s)
STEP: Creating pod pod-subpath-test-dynamicpv-js5v
STEP: Checking for subpath error in container status
Aug  7 16:29:04.045: INFO: Deleting pod "pod-subpath-test-dynamicpv-js5v" in namespace "provisioning-578"
Aug  7 16:29:04.049: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-js5v" to be fully deleted
STEP: Deleting pod
Aug  7 16:29:46.057: INFO: Deleting pod "pod-subpath-test-dynamicpv-js5v" in namespace "provisioning-578"
STEP: Deleting pvc
Aug  7 16:29:46.060: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io4s2bq"
... skipping 9 lines ...

• [SLOW TEST:194.889 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":416,"failed":0}
Aug  7 16:29:51.092: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":27,"failed":0}
Aug  7 16:29:51.367: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support readOnly directory specified in the volumeMount
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
... skipping 15 lines ...
Aug  7 16:28:14.454: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:28:14.464: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iok45bw] to have phase Bound
Aug  7 16:28:14.468: INFO: PersistentVolumeClaim hostpath.csi.k8s.iok45bw found but phase is Pending instead of Bound.
Aug  7 16:28:16.473: INFO: PersistentVolumeClaim hostpath.csi.k8s.iok45bw found and phase=Bound (2.008106217s)
STEP: Creating pod pod-subpath-test-dynamicpv-zk4w
STEP: Creating a pod to test subpath
Aug  7 16:28:16.485: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zk4w" in namespace "provisioning-6075" to be "Succeeded or Failed"
Aug  7 16:28:16.492: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105153ms
Aug  7 16:28:18.496: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011074133s
Aug  7 16:28:20.500: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014649264s
Aug  7 16:28:22.505: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020077284s
Aug  7 16:28:24.509: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024448509s
Aug  7 16:28:26.513: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028619105s
... skipping 35 lines ...
Aug  7 16:29:38.673: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.188385711s
Aug  7 16:29:40.678: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.193136677s
Aug  7 16:29:42.684: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.198862727s
Aug  7 16:29:44.689: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.203661586s
Aug  7 16:29:46.694: INFO: Pod "pod-subpath-test-dynamicpv-zk4w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m30.209044695s
STEP: Saw pod success
Aug  7 16:29:46.694: INFO: Pod "pod-subpath-test-dynamicpv-zk4w" satisfied condition "Succeeded or Failed"
Aug  7 16:29:46.698: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-zk4w container test-container-subpath-dynamicpv-zk4w: <nil>
STEP: delete the pod
Aug  7 16:29:46.717: INFO: Waiting for pod pod-subpath-test-dynamicpv-zk4w to disappear
Aug  7 16:29:46.720: INFO: Pod pod-subpath-test-dynamicpv-zk4w no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zk4w
Aug  7 16:29:46.720: INFO: Deleting pod "pod-subpath-test-dynamicpv-zk4w" in namespace "provisioning-6075"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":350,"failed":0}
Aug  7 16:29:51.758: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support creating multiple subpath from same volumes [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
... skipping 15 lines ...
Aug  7 16:28:09.054: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:28:09.060: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io59phd] to have phase Bound
Aug  7 16:28:09.063: INFO: PersistentVolumeClaim hostpath.csi.k8s.io59phd found but phase is Pending instead of Bound.
Aug  7 16:28:11.068: INFO: PersistentVolumeClaim hostpath.csi.k8s.io59phd found and phase=Bound (2.007282948s)
STEP: Creating pod pod-subpath-test-dynamicpv-s7km
STEP: Creating a pod to test multi_subpath
Aug  7 16:28:11.081: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-s7km" in namespace "provisioning-4414" to be "Succeeded or Failed"
Aug  7 16:28:11.088: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 4.931983ms
Aug  7 16:28:13.093: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009708261s
Aug  7 16:28:15.098: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0148296s
Aug  7 16:28:17.102: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018516309s
Aug  7 16:28:19.104: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021070763s
Aug  7 16:28:21.109: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026182869s
... skipping 40 lines ...
Aug  7 16:29:43.301: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.217854319s
Aug  7 16:29:45.305: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.221873196s
Aug  7 16:29:47.309: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.225813896s
Aug  7 16:29:49.314: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.230991879s
Aug  7 16:29:51.319: INFO: Pod "pod-subpath-test-dynamicpv-s7km": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m40.235779775s
STEP: Saw pod success
Aug  7 16:29:51.319: INFO: Pod "pod-subpath-test-dynamicpv-s7km" satisfied condition "Succeeded or Failed"
Aug  7 16:29:51.322: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-s7km container test-container-subpath-dynamicpv-s7km: <nil>
STEP: delete the pod
Aug  7 16:29:51.337: INFO: Waiting for pod pod-subpath-test-dynamicpv-s7km to disappear
Aug  7 16:29:51.341: INFO: Pod pod-subpath-test-dynamicpv-s7km no longer exists
STEP: Deleting pod
Aug  7 16:29:51.341: INFO: Deleting pod "pod-subpath-test-dynamicpv-s7km" in namespace "provisioning-4414"
... skipping 14 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support creating multiple subpath from same volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":-1,"completed":2,"skipped":228,"failed":0}
Aug  7 16:29:56.382: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support multiple inline ephemeral volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":2,"skipped":39,"failed":0}
Aug  7 16:29:59.793: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directories when readOnly specified in the volumeSource
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
... skipping 17 lines ...
Aug  7 16:26:33.553: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:26:33.561: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iovdlzs] to have phase Bound
Aug  7 16:26:33.563: INFO: PersistentVolumeClaim hostpath.csi.k8s.iovdlzs found but phase is Pending instead of Bound.
Aug  7 16:26:35.566: INFO: PersistentVolumeClaim hostpath.csi.k8s.iovdlzs found and phase=Bound (2.005579892s)
STEP: Creating pod pod-subpath-test-dynamicpv-ktgq
STEP: Creating a pod to test subpath
Aug  7 16:26:35.576: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ktgq" in namespace "provisioning-372" to be "Succeeded or Failed"
Aug  7 16:26:35.580: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876559ms
Aug  7 16:26:37.585: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008503181s
Aug  7 16:26:39.588: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011943029s
Aug  7 16:26:41.592: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016039842s
Aug  7 16:26:43.597: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020868357s
Aug  7 16:26:45.602: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025598712s
... skipping 39 lines ...
Aug  7 16:28:05.813: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.236936732s
Aug  7 16:28:07.817: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.240967295s
Aug  7 16:28:09.823: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.246752013s
Aug  7 16:28:11.827: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.251053219s
Aug  7 16:28:13.832: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m38.255611375s
STEP: Saw pod success
Aug  7 16:28:13.832: INFO: Pod "pod-subpath-test-dynamicpv-ktgq" satisfied condition "Succeeded or Failed"
Aug  7 16:28:13.835: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-ktgq container test-container-subpath-dynamicpv-ktgq: <nil>
STEP: delete the pod
Aug  7 16:28:13.860: INFO: Waiting for pod pod-subpath-test-dynamicpv-ktgq to disappear
Aug  7 16:28:13.864: INFO: Pod pod-subpath-test-dynamicpv-ktgq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ktgq
Aug  7 16:28:13.864: INFO: Deleting pod "pod-subpath-test-dynamicpv-ktgq" in namespace "provisioning-372"
STEP: Creating pod pod-subpath-test-dynamicpv-ktgq
STEP: Creating a pod to test subpath
Aug  7 16:28:13.878: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ktgq" in namespace "provisioning-372" to be "Succeeded or Failed"
Aug  7 16:28:13.881: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73415ms
Aug  7 16:28:15.885: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006774205s
Aug  7 16:28:17.889: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010646246s
Aug  7 16:28:19.893: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014348593s
Aug  7 16:28:21.897: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018499032s
Aug  7 16:28:23.900: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022028899s
... skipping 42 lines ...
Aug  7 16:29:50.092: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.213551669s
Aug  7 16:29:52.095: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.21715036s
Aug  7 16:29:54.099: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.220794558s
Aug  7 16:29:56.102: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.223508746s
Aug  7 16:29:58.107: INFO: Pod "pod-subpath-test-dynamicpv-ktgq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m44.22856279s
STEP: Saw pod success
Aug  7 16:29:58.107: INFO: Pod "pod-subpath-test-dynamicpv-ktgq" satisfied condition "Succeeded or Failed"
Aug  7 16:29:58.110: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-ktgq container test-container-subpath-dynamicpv-ktgq: <nil>
STEP: delete the pod
Aug  7 16:29:58.129: INFO: Waiting for pod pod-subpath-test-dynamicpv-ktgq to disappear
Aug  7 16:29:58.134: INFO: Pod pod-subpath-test-dynamicpv-ktgq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ktgq
Aug  7 16:29:58.134: INFO: Deleting pod "pod-subpath-test-dynamicpv-ktgq" in namespace "provisioning-372"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":59,"failed":0}
Aug  7 16:30:03.176: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 36 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":2,"skipped":72,"failed":0}
Aug  7 16:30:05.259: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 44 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":143,"failed":0}
Aug  7 16:30:07.025: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
... skipping 96 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":47,"failed":0}
Aug  7 16:30:12.224: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 49 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":241,"failed":0}
Aug  7 16:30:15.126: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 51 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":53,"failed":0}
Aug  7 16:30:18.263: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 35 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":3,"failed":0}
Aug  7 16:30:18.958: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should verify container cannot write to subpath readonly volumes [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
... skipping 14 lines ...
STEP: creating a claim
Aug  7 16:28:16.696: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:28:16.702: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iobrmqz] to have phase Bound
Aug  7 16:28:16.704: INFO: PersistentVolumeClaim hostpath.csi.k8s.iobrmqz found but phase is Pending instead of Bound.
Aug  7 16:28:18.708: INFO: PersistentVolumeClaim hostpath.csi.k8s.iobrmqz found and phase=Bound (2.006510704s)
STEP: Creating pod to format volume volume-prep-provisioning-1799
Aug  7 16:28:18.718: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-1799" in namespace "provisioning-1799" to be "Succeeded or Failed"
Aug  7 16:28:18.726: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 7.186531ms
Aug  7 16:28:20.730: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011110271s
Aug  7 16:28:22.734: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015864057s
Aug  7 16:28:24.737: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019036325s
Aug  7 16:28:26.741: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023014804s
Aug  7 16:28:28.746: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027328191s
... skipping 37 lines ...
Aug  7 16:29:44.903: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.184670094s
Aug  7 16:29:46.908: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.189207909s
Aug  7 16:29:48.911: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.192543805s
Aug  7 16:29:50.914: INFO: Pod "volume-prep-provisioning-1799": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.195926774s
Aug  7 16:29:52.919: INFO: Pod "volume-prep-provisioning-1799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m34.200515871s
STEP: Saw pod success
Aug  7 16:29:52.919: INFO: Pod "volume-prep-provisioning-1799" satisfied condition "Succeeded or Failed"
Aug  7 16:29:52.919: INFO: Deleting pod "volume-prep-provisioning-1799" in namespace "provisioning-1799"
Aug  7 16:29:52.929: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-1799" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-s5tk
STEP: Checking for subpath error in container status
Aug  7 16:30:14.948: INFO: Deleting pod "pod-subpath-test-dynamicpv-s5tk" in namespace "provisioning-1799"
Aug  7 16:30:14.958: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-s5tk" to be fully deleted
STEP: Deleting pod
Aug  7 16:30:14.963: INFO: Deleting pod "pod-subpath-test-dynamicpv-s5tk" in namespace "provisioning-1799"
STEP: Deleting pvc
Aug  7 16:30:14.965: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iobrmqz"
... skipping 12 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should verify container cannot write to subpath readonly volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":-1,"completed":2,"skipped":116,"failed":0}
Aug  7 16:30:19.996: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 53 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":253,"failed":0}
Aug  7 16:30:20.108: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
... skipping 95 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":174,"failed":0}
Aug  7 16:30:20.858: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 59 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":124,"failed":0}
Aug  7 16:30:22.740: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 123 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}
Aug  7 16:30:23.648: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 122 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":348,"failed":0}
Aug  7 16:30:25.125: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
... skipping 123 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}
Aug  7 16:30:28.818: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 108 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":133,"failed":0}
Aug  7 16:30:31.383: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 61 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":39,"failed":0}
Aug  7 16:30:32.595: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using file as subpath [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335
... skipping 37 lines ...
Aug  7 16:28:28.537: INFO: stderr: ""
Aug  7 16:28:28.537: INFO: stdout: ""
Aug  7 16:28:28.537: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
Aug  7 16:28:54.548: INFO: Container has restart count: 3
Aug  7 16:29:36.549: INFO: Container has restart count: 4
Aug  7 16:30:28.550: FAIL: while waiting for container to stabilize
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 37 lines ...
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:49 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:49 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Created: Created container test-container-subpath-dynamicpv-hltz
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:49 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Started: Started container test-container-subpath-dynamicpv-hltz
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:49 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:49 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Created: Created container test-container-volume-dynamicpv-hltz
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:26:50 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Started: Started container test-container-volume-dynamicpv-hltz
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:27:50 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Unhealthy: Liveness probe failed: cat: can't open '/probe-volume/probe-file': No such file or directory

Aug  7 16:30:37.594: INFO: At 2022-08-07 16:27:50 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} Killing: Container test-container-subpath-dynamicpv-hltz failed liveness probe, will be restarted
Aug  7 16:30:37.594: INFO: At 2022-08-07 16:28:01 +0000 UTC - event for pod-subpath-test-dynamicpv-hltz: {kubelet csi-prow-worker2} BackOff: Back-off restarting failed container
Aug  7 16:30:37.599: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug  7 16:30:37.599: INFO: 
Aug  7 16:30:37.606: INFO: 
Logging node info for node csi-prow-control-plane
Aug  7 16:30:37.609: INFO: Node Info: &Node{ObjectMeta:{csi-prow-control-plane    86beba06-41b7-4930-b08d-aeb27f56c673 4259 0 2022-08-07 16:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:csi-prow-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-08-07 16:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-08-07 16:18:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-08-07 16:18:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/csi-prow/csi-prow-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:csi-prow-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2651913b4e99492eaec52e53e0bb21aa,SystemUUID:743e8327-6bd6-4d09-adc9-e0c61ae716c1,BootID:41e7026b-605e-4bf9-8f11-f850fd1f0bfb,KernelVersion:5.4.0-1068-gke,OSImage:Ubuntu 21.04,ContainerRuntimeVersion:containerd://1.5.2,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:132714699,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:126834637,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:121042741,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:51865396,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:12945155,},ContainerImage{Names:[k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug  7 16:30:37.609: INFO: 
... skipping 86 lines ...
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using file as subpath [Slow][LinuxOnly] [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335

    Aug  7 16:30:28.551: while waiting for container to stabilize
    Unexpected error:
        <*errors.errorString | 0xc000248250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

... skipping 107 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":336,"failed":0}
Aug  7 16:30:42.314: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 45 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":459,"failed":0}
Aug  7 16:30:56.909: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 47 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":21,"failed":0}
Aug  7 16:30:57.177: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using directory as subpath [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320
... skipping 36 lines ...
Aug  7 16:28:58.570: INFO: stderr: ""
Aug  7 16:28:58.570: INFO: stdout: ""
Aug  7 16:28:58.570: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
Aug  7 16:29:34.578: INFO: Container has restart count: 4
Aug  7 16:30:08.578: INFO: Container has restart count: 5
Aug  7 16:30:58.582: FAIL: while waiting for container to stabilize
Unexpected error:
    <*errors.errorString | 0xc000250250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 37 lines ...
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:32 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Started: Started container init-volume-dynamicpv-6w29
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:32 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:33 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Started: Started container test-container-subpath-dynamicpv-6w29
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:33 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:33 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Created: Created container test-container-volume-dynamicpv-6w29
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:27:33 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Started: Started container test-container-volume-dynamicpv-6w29
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:28:20 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Unhealthy: Liveness probe failed: cat: can't open '/probe-volume/probe-file': No such file or directory

Aug  7 16:31:11.622: INFO: At 2022-08-07 16:28:20 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} Killing: Container test-container-subpath-dynamicpv-6w29 failed liveness probe, will be restarted
Aug  7 16:31:11.622: INFO: At 2022-08-07 16:28:32 +0000 UTC - event for pod-subpath-test-dynamicpv-6w29: {kubelet csi-prow-worker2} BackOff: Back-off restarting failed container
Aug  7 16:31:11.625: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug  7 16:31:11.625: INFO: 
Aug  7 16:31:11.628: INFO: 
Logging node info for node csi-prow-control-plane
Aug  7 16:31:11.630: INFO: Node Info: &Node{ObjectMeta:{csi-prow-control-plane    86beba06-41b7-4930-b08d-aeb27f56c673 4259 0 2022-08-07 16:18:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:csi-prow-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-08-07 16:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-08-07 16:18:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-08-07 16:18:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/csi-prow/csi-prow-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-08-07 16:28:40 +0000 UTC,LastTransitionTime:2022-08-07 16:18:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:csi-prow-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2651913b4e99492eaec52e53e0bb21aa,SystemUUID:743e8327-6bd6-4d09-adc9-e0c61ae716c1,BootID:41e7026b-605e-4bf9-8f11-f850fd1f0bfb,KernelVersion:5.4.0-1068-gke,OSImage:Ubuntu 21.04,ContainerRuntimeVersion:containerd://1.5.2,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:132714699,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:126834637,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:121042741,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:51865396,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:12945155,},ContainerImage{Names:[k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug  7 16:31:11.631: INFO: 
... skipping 78 lines ...
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using directory as subpath [Slow] [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320

    Aug  7 16:30:58.582: while waiting for container to stabilize
    Unexpected error:
        <*errors.errorString | 0xc000250250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870
------------------------------
{"msg":"FAILED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":-1,"completed":0,"skipped":221,"failed":1,"failures":["External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]"]}
Aug  7 16:31:12.060: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":-1,"completed":0,"skipped":9,"failed":1,"failures":["External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]"]}
Aug  7 16:30:38.157: INFO: Running AfterSuite actions on all nodes
Aug  7 16:31:12.113: INFO: Running AfterSuite actions on node 1
Aug  7 16:31:12.113: INFO: Dumping logs locally to: /logs/artifacts
Aug  7 16:31:12.114: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 2 Failures:

[Fail] External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath [It] should support restarting containers using file as subpath [Slow][LinuxOnly] 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870

[Fail] External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath [It] should support restarting containers using directory as subpath [Slow] 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870

Ran 48 of 5976 Specs in 283.491 seconds
FAIL! -- 46 Passed | 2 Failed | 0 Pending | 5928 Skipped


Ginkgo ran 1 suite in 5m1.066648558s
Test Suite Failed
Sun Aug  7 16:31:12 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
WARNING: E2E parallel failed
Sun Aug  7 16:31:12 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.RqsVrLN1c4/e2e-repo-list ginkgo -v -p -nodes 40 -focus=External.Storage.*(\[Feature:VolumeSnapshotDataSource\]) -skip=\[Serial\]|\[Disruptive\] /home/prow/go/pkg/csiprow.RqsVrLN1c4/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.RqsVrLN1c4/test-driver.yaml
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659889873 - Will randomize all specs
Will run 5976 specs

... skipping 408 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":107,"failed":0}
Aug  7 16:32:36.418: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning 
  should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
... skipping 104 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":43,"failed":0}
Aug  7 16:32:50.016: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  7 16:31:38.367: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:31:38.380: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioz7qn7] to have phase Bound
Aug  7 16:31:38.385: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioz7qn7 found but phase is Pending instead of Bound.
Aug  7 16:31:40.390: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioz7qn7 found and phase=Bound (2.009771126s)
STEP: [init] starting a pod to use the claim
Aug  7 16:31:40.400: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-nk7tt" in namespace "snapshotting-3741" to be "Succeeded or Failed"
Aug  7 16:31:40.404: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 3.552891ms
Aug  7 16:31:42.408: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008163845s
Aug  7 16:31:44.412: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011910376s
Aug  7 16:31:46.415: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015106819s
Aug  7 16:31:48.420: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01937863s
Aug  7 16:31:50.427: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027140837s
Aug  7 16:31:52.432: INFO: Pod "pvc-snapshottable-tester-nk7tt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.031672209s
STEP: Saw pod success
Aug  7 16:31:52.432: INFO: Pod "pvc-snapshottable-tester-nk7tt" satisfied condition "Succeeded or Failed"
Aug  7 16:31:52.440: INFO: Pod pvc-snapshottable-tester-nk7tt has the following logs: 
Aug  7 16:31:52.440: INFO: Deleting pod "pvc-snapshottable-tester-nk7tt" in namespace "snapshotting-3741"
Aug  7 16:31:52.449: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-nk7tt" to be fully deleted
Aug  7 16:31:52.453: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioz7qn7] to have phase Bound
Aug  7 16:31:52.456: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioz7qn7 found and phase=Bound (3.338936ms)
STEP: [init] checking the claim
... skipping 11 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  7 16:31:54.510: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-bcdhw" in namespace "snapshotting-3741" to be "Succeeded or Failed"
Aug  7 16:31:54.513: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.785606ms
Aug  7 16:31:56.517: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007196278s
Aug  7 16:31:58.521: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010863465s
Aug  7 16:32:00.525: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014765503s
Aug  7 16:32:02.529: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018974668s
Aug  7 16:32:04.533: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023595512s
Aug  7 16:32:06.537: INFO: Pod "pvc-snapshottable-data-tester-bcdhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027293117s
STEP: Saw pod success
Aug  7 16:32:06.537: INFO: Pod "pvc-snapshottable-data-tester-bcdhw" satisfied condition "Succeeded or Failed"
Aug  7 16:32:06.546: INFO: Pod pvc-snapshottable-data-tester-bcdhw has the following logs: 
Aug  7 16:32:06.546: INFO: Deleting pod "pvc-snapshottable-data-tester-bcdhw" in namespace "snapshotting-3741"
Aug  7 16:32:06.556: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-bcdhw" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  7 16:32:28.578: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43009 --kubeconfig=/root/.kube/config --namespace=snapshotting-3741 exec restored-pvc-tester-t4psh --namespace=snapshotting-3741 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":10,"failed":0}
Aug  7 16:33:07.841: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  7 16:31:38.306: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:31:38.358: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io8jfxd] to have phase Bound
Aug  7 16:31:38.364: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8jfxd found but phase is Pending instead of Bound.
Aug  7 16:31:40.367: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8jfxd found and phase=Bound (2.009654064s)
STEP: [init] starting a pod to use the claim
Aug  7 16:31:40.377: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-xsnp4" in namespace "snapshotting-8907" to be "Succeeded or Failed"
Aug  7 16:31:40.383: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.428564ms
Aug  7 16:31:42.388: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010137926s
Aug  7 16:31:44.393: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014673374s
Aug  7 16:31:46.397: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018969486s
Aug  7 16:31:48.403: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025236825s
Aug  7 16:31:50.407: INFO: Pod "pvc-snapshottable-tester-xsnp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.029399367s
STEP: Saw pod success
Aug  7 16:31:50.407: INFO: Pod "pvc-snapshottable-tester-xsnp4" satisfied condition "Succeeded or Failed"
Aug  7 16:31:50.427: INFO: Pod pvc-snapshottable-tester-xsnp4 has the following logs: 
Aug  7 16:31:50.427: INFO: Deleting pod "pvc-snapshottable-tester-xsnp4" in namespace "snapshotting-8907"
Aug  7 16:31:50.437: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-xsnp4" to be fully deleted
Aug  7 16:31:50.441: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io8jfxd] to have phase Bound
Aug  7 16:31:50.443: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8jfxd found and phase=Bound (2.645482ms)
STEP: [init] checking the claim
... skipping 31 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  7 16:31:56.601: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-pqkv9" in namespace "snapshotting-8907" to be "Succeeded or Failed"
Aug  7 16:31:56.609: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612601ms
Aug  7 16:31:58.613: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012620655s
Aug  7 16:32:00.617: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016525324s
Aug  7 16:32:02.622: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02158889s
Aug  7 16:32:04.626: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025146401s
Aug  7 16:32:06.632: INFO: Pod "pvc-snapshottable-data-tester-pqkv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.030650382s
STEP: Saw pod success
Aug  7 16:32:06.632: INFO: Pod "pvc-snapshottable-data-tester-pqkv9" satisfied condition "Succeeded or Failed"
Aug  7 16:32:06.641: INFO: Pod pvc-snapshottable-data-tester-pqkv9 has the following logs: 
Aug  7 16:32:06.641: INFO: Deleting pod "pvc-snapshottable-data-tester-pqkv9" in namespace "snapshotting-8907"
Aug  7 16:32:06.653: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-pqkv9" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  7 16:32:30.678: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43009 --kubeconfig=/root/.kube/config --namespace=snapshotting-8907 exec restored-pvc-tester-pn8db --namespace=snapshotting-8907 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":59,"failed":0}
Aug  7 16:33:21.936: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  7 16:31:37.332: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  7 16:31:37.377: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iob7r9s] to have phase Bound
Aug  7 16:31:37.399: INFO: PersistentVolumeClaim hostpath.csi.k8s.iob7r9s found but phase is Pending instead of Bound.
Aug  7 16:31:39.402: INFO: PersistentVolumeClaim hostpath.csi.k8s.iob7r9s found and phase=Bound (2.024925035s)
STEP: [init] starting a pod to use the claim
Aug  7 16:31:39.411: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-29r5t" in namespace "snapshotting-5765" to be "Succeeded or Failed"
Aug  7 16:31:39.414: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 3.19275ms
Aug  7 16:31:41.418: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006914009s
Aug  7 16:31:43.423: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012261552s
Aug  7 16:31:45.429: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018556721s
Aug  7 16:31:47.433: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022706474s
Aug  7 16:31:49.437: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026640629s
Aug  7 16:31:51.442: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030876811s
Aug  7 16:31:53.445: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034620399s
Aug  7 16:31:55.449: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 16.038609702s
Aug  7 16:31:57.453: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 18.042408938s
Aug  7 16:31:59.457: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Pending", Reason="", readiness=false. Elapsed: 20.046337588s
Aug  7 16:32:01.460: INFO: Pod "pvc-snapshottable-tester-29r5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.049731786s
STEP: Saw pod success
Aug  7 16:32:01.461: INFO: Pod "pvc-snapshottable-tester-29r5t" satisfied condition "Succeeded or Failed"
Aug  7 16:32:01.468: INFO: Pod pvc-snapshottable-tester-29r5t has the following logs: 
Aug  7 16:32:01.468: INFO: Deleting pod "pvc-snapshottable-tester-29r5t" in namespace "snapshotting-5765"
Aug  7 16:32:01.476: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-29r5t" to be fully deleted
Aug  7 16:32:01.479: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iob7r9s] to have phase Bound
Aug  7 16:32:01.481: INFO: PersistentVolumeClaim hostpath.csi.k8s.iob7r9s found and phase=Bound (2.254621ms)
STEP: [init] checking the claim
... skipping 31 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  7 16:32:07.601: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-9z5nc" in namespace "snapshotting-5765" to be "Succeeded or Failed"
Aug  7 16:32:07.603: INFO: Pod "pvc-snapshottable-data-tester-9z5nc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840281ms
Aug  7 16:32:09.608: INFO: Pod "pvc-snapshottable-data-tester-9z5nc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006911319s
Aug  7 16:32:11.612: INFO: Pod "pvc-snapshottable-data-tester-9z5nc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010919683s
Aug  7 16:32:13.616: INFO: Pod "pvc-snapshottable-data-tester-9z5nc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015586727s
STEP: Saw pod success
Aug  7 16:32:13.616: INFO: Pod "pvc-snapshottable-data-tester-9z5nc" satisfied condition "Succeeded or Failed"
Aug  7 16:32:13.626: INFO: Pod pvc-snapshottable-data-tester-9z5nc has the following logs: 
Aug  7 16:32:13.627: INFO: Deleting pod "pvc-snapshottable-data-tester-9z5nc" in namespace "snapshotting-5765"
Aug  7 16:32:13.643: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-9z5nc" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  7 16:32:25.683: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43009 --kubeconfig=/root/.kube/config --namespace=snapshotting-5765 exec restored-pvc-tester-ldlkm --namespace=snapshotting-5765 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  7 16:32:50.003: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:51.008: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:52.013: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:53.017: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:54.022: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:55.030: INFO: volumesnapshotcontents pre-provisioned-snapcontent-d99a80e7-29f2-47b1-902c-021e29d4cff5 has been found and is not deleted
Aug  7 16:32:56.030: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  7 16:32:56.036: INFO: Pod restored-pvc-tester-ldlkm has the following logs: 
Aug  7 16:32:56.036: INFO: Deleting pod "restored-pvc-tester-ldlkm" in namespace "snapshotting-5765"
Aug  7 16:32:56.043: INFO: Wait up to 5m0s for pod "restored-pvc-tester-ldlkm" to be fully deleted
Aug  7 16:33:36.060: INFO: deleting claim "snapshotting-5765"/"pvc-d59cm"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":58,"failed":0}
Aug  7 16:33:43.126: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 18 lines ...
Aug  7 16:31:36.974: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io6x6qf] to have phase Bound
Aug  7 16:31:37.011: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6x6qf found but phase is Pending instead of Bound.
Aug  7 16:31:39.014: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6x6qf found but phase is Pending instead of Bound.
Aug  7 16:31:41.018: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6x6qf found but phase is Pending instead of Bound.
Aug  7 16:31:43.023: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6x6qf found and phase=Bound (6.048546102s)
STEP: [init] starting a pod to use the claim
Aug  7 16:31:43.035: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-w28q2" in namespace "snapshotting-6014" to be "Succeeded or Failed"
Aug  7 16:31:43.040: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.876069ms
Aug  7 16:31:45.044: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008833493s
Aug  7 16:31:47.048: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012725768s
Aug  7 16:31:49.053: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01782869s
Aug  7 16:31:51.058: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02273099s
Aug  7 16:31:53.063: INFO: Pod "pvc-snapshottable-tester-w28q2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.027373875s
STEP: Saw pod success
Aug  7 16:31:53.063: INFO: Pod "pvc-snapshottable-tester-w28q2" satisfied condition "Succeeded or Failed"
Aug  7 16:31:53.070: INFO: Pod pvc-snapshottable-tester-w28q2 has the following logs: 
Aug  7 16:31:53.070: INFO: Deleting pod "pvc-snapshottable-tester-w28q2" in namespace "snapshotting-6014"
Aug  7 16:31:53.085: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-w28q2" to be fully deleted
Aug  7 16:31:53.088: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io6x6qf] to have phase Bound
Aug  7 16:31:53.091: INFO: PersistentVolumeClaim hostpath.csi.k8s.io6x6qf found and phase=Bound (2.878447ms)
STEP: [init] checking the claim
... skipping 12 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  7 16:31:57.137: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-9h8tq" in namespace "snapshotting-6014" to be "Succeeded or Failed"
Aug  7 16:31:57.140: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798457ms
Aug  7 16:31:59.145: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007215417s
Aug  7 16:32:01.149: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011846918s
Aug  7 16:32:03.154: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016611928s
Aug  7 16:32:05.157: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019969746s
Aug  7 16:32:07.162: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024006772s
Aug  7 16:32:09.171: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033228776s
Aug  7 16:32:11.176: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.038060204s
Aug  7 16:32:13.181: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.043597273s
Aug  7 16:32:15.186: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.048380529s
Aug  7 16:32:17.190: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.052349584s
Aug  7 16:32:19.194: INFO: Pod "pvc-snapshottable-data-tester-9h8tq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056881831s
STEP: Saw pod success
Aug  7 16:32:19.194: INFO: Pod "pvc-snapshottable-data-tester-9h8tq" satisfied condition "Succeeded or Failed"
Aug  7 16:32:19.202: INFO: Pod pvc-snapshottable-data-tester-9h8tq has the following logs: 
Aug  7 16:32:19.202: INFO: Deleting pod "pvc-snapshottable-data-tester-9h8tq" in namespace "snapshotting-6014"
Aug  7 16:32:19.213: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-9h8tq" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  7 16:32:33.248: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43009 --kubeconfig=/root/.kube/config --namespace=snapshotting-6014 exec restored-pvc-tester-ksl24 --namespace=snapshotting-6014 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  7 16:32:57.548: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:32:58.553: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:32:59.558: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:33:00.563: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:33:01.567: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:33:02.574: INFO: volumesnapshotcontents snapcontent-3d4184d6-51af-486f-adb6-245bcd4e9a19 has been found and is not deleted
Aug  7 16:33:03.575: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  7 16:33:03.580: INFO: Pod restored-pvc-tester-ksl24 has the following logs: 
Aug  7 16:33:03.580: INFO: Deleting pod "restored-pvc-tester-ksl24" in namespace "snapshotting-6014"
Aug  7 16:33:03.585: INFO: Wait up to 5m0s for pod "restored-pvc-tester-ksl24" to be fully deleted
Aug  7 16:33:47.596: INFO: deleting claim "snapshotting-6014"/"pvc-vw6q6"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":198,"failed":0}
Aug  7 16:33:54.674: INFO: Running AfterSuite actions on all nodes


Aug  7 16:31:38.199: INFO: Running AfterSuite actions on all nodes
Aug  7 16:33:54.724: INFO: Running AfterSuite actions on node 1
Aug  7 16:33:54.725: INFO: Dumping logs locally to: /logs/artifacts
Aug  7 16:33:54.725: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory


Ran 6 of 5976 Specs in 141.910 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 5970 Skipped


Ginkgo ran 1 suite in 2m41.725073746s
Test Suite Passed
Sun Aug  7 16:33:54 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel-features.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 5 lines ...
	to update it:
	go mod tidy
Sun Aug  7 16:33:55 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.RqsVrLN1c4/e2e-repo-list ginkgo -v -focus=External.Storage.*(\[Serial\]|\[Disruptive\]) -skip=\[Feature:|Disruptive /home/prow/go/pkg/csiprow.RqsVrLN1c4/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.RqsVrLN1c4/test-driver.yaml
Aug  7 16:33:57.415: INFO: Driver loaded from path [/home/prow/go/pkg/csiprow.RqsVrLN1c4/test-driver.yaml]: &{DriverInfo:{Name:hostpath.csi.k8s.io InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max: Min:1Mi} SupportedFsType:map[:{}] SupportedMountOption:map[] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true multipods:true nodeExpansion:true persistence:true singleNodeVolume:true snapshotDataSource:true topology:true] RequiredAccessModes:[] TopologyKeys:[] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:true FromFile: FromExistingClassName:} SnapshotClass:{FromName:true FromFile: FromExistingClassName:} InlineVolumes:[{Attributes:map[] Shared:false ReadOnly:false}] ClientNodeName:csi-prow-worker2 Timeouts:map[]}
Aug  7 16:33:57.488: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0807 16:33:57.489046  103160 e2e.go:129] Starting e2e run "198f53f3-1501-4c3a-b376-a1f7a047b992" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659890035 - Will randomize all specs
Will run 4 of 5976 specs

Aug  7 16:33:57.560: INFO: >>> kubeConfig: /root/.kube/config
... skipping 113 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_stress.go:89
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug  7 16:33:57.782: INFO: Running AfterSuite actions on all nodes
Aug  7 16:33:57.782: INFO: Running AfterSuite actions on node 1
Aug  7 16:33:57.782: INFO: Dumping logs locally to: /logs/artifacts
Aug  7 16:33:57.783: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":0,"skipped":5976,"failed":0}

Ran 0 of 5976 Specs in 0.226 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 5976 Skipped
PASS

Ginkgo ran 1 suite in 2.227291405s
Test Suite Passed
Sun Aug  7 16:33:57 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_serial.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 22 lines ...