Result: FAILURE
Tests: 0 failed / 67 succeeded
Started: 2022-08-08 16:18
Elapsed: 18m2s
Revision: master

No Test Failures!


67 passed tests, 11 skipped tests

Error lines from build-log.txt

... skipping 49 lines ...
non alpha feature gates for latest Kubernetes: CSI_PROW_E2E_GATES_LATEST=
non alpha E2E feature gates: CSI_PROW_E2E_GATES=
external-snapshotter version tag: CSI_SNAPSHOTTER_VERSION=master
tests that need to be skipped: CSI_PROW_E2E_SKIP=Disruptive
work directory: CSI_PROW_WORK=/home/prow/go/pkg/csiprow.3zhlDRENWi
artifacts: ARTIFACTS=/logs/artifacts
Mon Aug  8 16:18:17 UTC 2022 go1.19 $ curl --fail --location -o /home/prow/go/pkg/csiprow.3zhlDRENWi/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

100 6660k  100 6660k    0     0  22.8M      0 --:--:-- --:--:-- --:--:-- 22.8M
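The download above pins kind v0.11.1 from its GitHub release page and then creates a cluster from a generated kind-config.yaml. A minimal sketch of the equivalent manual steps (the `csi-prow` cluster name is an assumption, not taken from the log):

```shell
# Build the kind release URL for a version/arch pair; the job above pins
# v0.11.1 on linux-amd64.
kind_url() {
  local version="$1" arch="${2:-linux-amd64}"
  echo "https://github.com/kubernetes-sigs/kind/releases/download/${version}/kind-${arch}"
}

# Fetch the binary and bring up a cluster from the generated kind-config.yaml
# (network access and Docker required, so shown as comments):
#   curl --fail --location -o ./kind "$(kind_url v0.11.1)"
#   chmod +x ./kind
#   ./kind create cluster --name csi-prow --config kind-config.yaml
```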
No kind clusters found.
INFO: kind-config.yaml:
... skipping 169 lines ...
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 306d58d Merge pull request #383 from pohly/changelog-5.0.0
Mon Aug  8 16:20:19 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test$ git clean -fdx
Mon Aug  8 16:20:19 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ curl --fail --location https://dl.google.com/go/go1.18.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 17  135M   17 24.1M    0     0  34.5M      0  0:00:03 --:--:--  0:00:03 34.5M
 41  135M   41 55.4M    0     0  32.8M      0  0:00:04  0:00:01  0:00:03 32.8M
 78  135M   78  105M    0     0  39.4M      0  0:00:03  0:00:02  0:00:01 39.4M
100  135M  100  135M    0     0  37.0M      0  0:00:03  0:00:03 --:--:-- 37.0M
Mon Aug  8 16:20:22 UTC 2022 go1.18 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ go build -o /home/prow/go/pkg/csiprow.3zhlDRENWi/csi-sanity
Mon Aug  8 16:20:33 UTC 2022 go1.19 $ /home/prow/go/pkg/csiprow.3zhlDRENWi/csi-sanity -ginkgo.v -csi.junitfile /logs/artifacts/junit_sanity.xml -csi.endpoint dns:///172.18.0.4:32282 -csi.stagingdir /tmp/staging -csi.mountdir /tmp/mount -csi.createstagingpathcmd /home/prow/go/pkg/csiprow.3zhlDRENWi/mkdir_in_pod.sh -csi.createmountpathcmd /home/prow/go/pkg/csiprow.3zhlDRENWi/mkdir_in_pod.sh -csi.removestagingpathcmd /home/prow/go/pkg/csiprow.3zhlDRENWi/rmdir_in_pod.sh -csi.removemountpathcmd /home/prow/go/pkg/csiprow.3zhlDRENWi/rmdir_in_pod.sh -csi.checkpathcmd /home/prow/go/pkg/csiprow.3zhlDRENWi/checkdir_in_pod.sh
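The invocation above runs the csi-sanity suite against the driver's gRPC endpoint and delegates staging/mount path management to helper scripts executed inside the cluster. A sketch of how the core flag list is assembled (endpoint and work directory copied from the log; the commented-out run needs this job's cluster and deployed driver):

```shell
# Values taken from the command line in the log above.
ENDPOINT='dns:///172.18.0.4:32282'
WORK='/home/prow/go/pkg/csiprow.3zhlDRENWi'

# Emit a representative subset of the csi-sanity flags, space-separated.
sanity_args() {
  printf '%s ' \
    -ginkgo.v \
    -csi.endpoint "$ENDPOINT" \
    -csi.stagingdir /tmp/staging \
    -csi.mountdir /tmp/mount \
    -csi.createstagingpathcmd "$WORK/mkdir_in_pod.sh" \
    -csi.removestagingpathcmd "$WORK/rmdir_in_pod.sh"
}

# Actual run (requires the kind cluster and a reachable driver):
#   "$WORK/csi-sanity" $(sanity_args)
```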
Running Suite: CSI Driver Test Suite - /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path
=======================================================================================================
Random Seed: 1659975633

Will run 77 of 78 specs
------------------------------
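Ginkgo prints the randomization seed so a run's spec order can be reproduced. A minimal sketch of rebuilding the reproduction flags (`-ginkgo.seed` is a standard Ginkgo test-binary flag; the remaining `-csi.*` flags are elided here):

```shell
# Reconstruct reproduction flags from a seed value; 1659975633 is the seed
# printed by this run.
repro_flags() {
  local seed="$1"
  printf -- '-ginkgo.v -ginkgo.seed=%s' "$seed"
}

# Usage (binary path and -csi.* flags as in the command earlier in the log):
#   "$WORK/csi-sanity" $(repro_flags 1659975633) ...
```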
ExpandVolume [Controller Server]
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528
STEP: connecting to CSI driver 08/08/22 16:20:33.536
STEP: creating mount and staging directories 08/08/22 16:20:33.557
------------------------------
• [0.827 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528

  Begin Captured GinkgoWriter Output >>
    STEP: connecting to CSI driver 08/08/22 16:20:33.536
    STEP: creating mount and staging directories 08/08/22 16:20:33.557
  << End Captured GinkgoWriter Output
------------------------------
ExpandVolume [Controller Server]
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:34.363
STEP: creating mount and staging directories 08/08/22 16:20:34.363
------------------------------
• [0.707 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:34.363
    STEP: creating mount and staging directories 08/08/22 16:20:34.363
  << End Captured GinkgoWriter Output
... skipping 76 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:37.281
    STEP: creating mount and staging directories 08/08/22 16:20:37.281
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ListVolumes
  should fail when an invalid starting_token is passed
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:38.023
STEP: creating mount and staging directories 08/08/22 16:20:38.024
------------------------------
• [0.726 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ListVolumes
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:175
    should fail when an invalid starting_token is passed
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:38.023
    STEP: creating mount and staging directories 08/08/22 16:20:38.024
  << End Captured GinkgoWriter Output
... skipping 23 lines ...
------------------------------
P [PENDING]
Controller Service [Controller Server] ListVolumes pagination should detect volumes added between pages and accept tokens when the last volume from a page is deleted
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:268
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:39.544
STEP: creating mount and staging directories 08/08/22 16:20:39.545
------------------------------
• [0.688 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no name is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:39.544
    STEP: creating mount and staging directories 08/08/22 16:20:39.545
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:40.233
STEP: creating mount and staging directories 08/08/22 16:20:40.233
------------------------------
• [0.698 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:40.233
    STEP: creating mount and staging directories 08/08/22 16:20:40.233
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:41.703
    STEP: creating mount and staging directories 08/08/22 16:20:41.703
    STEP: creating a volume 08/08/22 16:20:42.077
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when requesting to create a volume with already existing name and same capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:42.514
STEP: creating mount and staging directories 08/08/22 16:20:42.515
STEP: creating a volume 08/08/22 16:20:42.883
------------------------------
• [0.778 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when requesting to create a volume with already existing name and same capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:42.514
    STEP: creating mount and staging directories 08/08/22 16:20:42.515
    STEP: creating a volume 08/08/22 16:20:42.883
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when requesting to create a volume with already existing name and different capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:43.292
STEP: creating mount and staging directories 08/08/22 16:20:43.292
STEP: creating a volume 08/08/22 16:20:43.668
------------------------------
• [0.779 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when requesting to create a volume with already existing name and different capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:43.292
    STEP: creating mount and staging directories 08/08/22 16:20:43.292
    STEP: creating a volume 08/08/22 16:20:43.668
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when creating volume with maximum-length name
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:44.071
STEP: creating mount and staging directories 08/08/22 16:20:44.072
STEP: creating a volume 08/08/22 16:20:44.431
------------------------------
• [0.763 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when creating volume with maximum-length name
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:44.071
    STEP: creating mount and staging directories 08/08/22 16:20:44.072
    STEP: creating a volume 08/08/22 16:20:44.431
... skipping 21 lines ...
    STEP: creating mount and staging directories 08/08/22 16:20:44.835
    STEP: creating a snapshot 08/08/22 16:20:45.198
    STEP: creating a volume from source snapshot 08/08/22 16:20:45.205
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source snapshot is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:45.617
STEP: creating mount and staging directories 08/08/22 16:20:45.617
STEP: creating a volume from source snapshot 08/08/22 16:20:45.979
------------------------------
• [0.727 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source snapshot is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:45.617
    STEP: creating mount and staging directories 08/08/22 16:20:45.617
    STEP: creating a volume from source snapshot 08/08/22 16:20:45.979
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/08/22 16:20:46.344
    STEP: creating a volume 08/08/22 16:20:46.706
    STEP: creating a volume from source volume 08/08/22 16:20:46.708
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:47.081
STEP: creating mount and staging directories 08/08/22 16:20:47.081
STEP: creating a volume from source snapshot 08/08/22 16:20:47.433
------------------------------
• [0.704 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:47.081
    STEP: creating mount and staging directories 08/08/22 16:20:47.081
    STEP: creating a volume from source snapshot 08/08/22 16:20:47.433
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] DeleteVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:47.785
STEP: creating mount and staging directories 08/08/22 16:20:47.785
------------------------------
• [0.704 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  DeleteVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:664
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:47.785
    STEP: creating mount and staging directories 08/08/22 16:20:47.785
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: creating mount and staging directories 08/08/22 16:20:49.172
    STEP: creating a volume 08/08/22 16:20:49.504
    STEP: deleting a volume 08/08/22 16:20:49.506
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:49.887
STEP: creating mount and staging directories 08/08/22 16:20:49.887
------------------------------
• [0.730 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:49.887
    STEP: creating mount and staging directories 08/08/22 16:20:49.887
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:50.618
STEP: creating mount and staging directories 08/08/22 16:20:50.618
STEP: creating a single node writer volume 08/08/22 16:20:51.013
------------------------------
• [0.759 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:50.618
    STEP: creating mount and staging directories 08/08/22 16:20:50.618
    STEP: creating a single node writer volume 08/08/22 16:20:51.013
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/08/22 16:20:51.377
    STEP: creating a single node writer volume 08/08/22 16:20:51.733
    STEP: validating volume capabilities 08/08/22 16:20:51.735
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when the requested volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:52.108
STEP: creating mount and staging directories 08/08/22 16:20:52.108
------------------------------
• [0.762 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when the requested volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:52.108
    STEP: creating mount and staging directories 08/08/22 16:20:52.108
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:52.87
STEP: creating mount and staging directories 08/08/22 16:20:52.871
------------------------------
S [SKIPPED] [0.789 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:52.87
    STEP: creating mount and staging directories 08/08/22 16:20:52.871
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no node id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:53.66
STEP: creating mount and staging directories 08/08/22 16:20:53.66
------------------------------
S [SKIPPED] [0.915 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no node id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:53.66
    STEP: creating mount and staging directories 08/08/22 16:20:53.66
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:54.575
STEP: creating mount and staging directories 08/08/22 16:20:54.575
------------------------------
S [SKIPPED] [0.737 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:54.575
    STEP: creating mount and staging directories 08/08/22 16:20:54.575
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when publishing more volumes than the node max attach limit
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:55.313
STEP: creating mount and staging directories 08/08/22 16:20:55.313
------------------------------
S [SKIPPED] [0.748 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when publishing more volumes than the node max attach limit
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:55.313
    STEP: creating mount and staging directories 08/08/22 16:20:55.313
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:56.061
STEP: creating mount and staging directories 08/08/22 16:20:56.061
------------------------------
S [SKIPPED] [0.727 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:56.061
    STEP: creating mount and staging directories 08/08/22 16:20:56.061
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the node does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:56.788
STEP: creating mount and staging directories 08/08/22 16:20:56.788
------------------------------
S [SKIPPED] [0.713 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the node does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:56.788
    STEP: creating mount and staging directories 08/08/22 16:20:56.788
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume is already published but is incompatible
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:57.5
STEP: creating mount and staging directories 08/08/22 16:20:57.501
------------------------------
S [SKIPPED] [0.689 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume is already published but is incompatible
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:57.5
    STEP: creating mount and staging directories 08/08/22 16:20:57.501
  << End Captured GinkgoWriter Output
... skipping 43 lines ...
  << End Captured GinkgoWriter Output

  Controller Publish, UnpublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1059
------------------------------
Controller Service [Controller Server] ControllerUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:59.61
STEP: creating mount and staging directories 08/08/22 16:20:59.61
------------------------------
S [SKIPPED] [0.723 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerUnpublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1073
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:20:59.61
    STEP: creating mount and staging directories 08/08/22 16:20:59.61
  << End Captured GinkgoWriter Output
... skipping 39 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:01.068
    STEP: creating mount and staging directories 08/08/22 16:21:01.068
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:01.81
STEP: creating mount and staging directories 08/08/22 16:21:01.81
------------------------------
• [0.777 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:01.81
    STEP: creating mount and staging directories 08/08/22 16:21:01.81
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:02.587
STEP: creating mount and staging directories 08/08/22 16:21:02.587
------------------------------
• [0.767 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:02.587
    STEP: creating mount and staging directories 08/08/22 16:21:02.587
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:03.354
STEP: creating mount and staging directories 08/08/22 16:21:03.355
------------------------------
• [0.795 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:03.354
    STEP: creating mount and staging directories 08/08/22 16:21:03.355
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:04.15
STEP: creating mount and staging directories 08/08/22 16:21:04.15
------------------------------
• [0.780 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:04.15
    STEP: creating mount and staging directories 08/08/22 16:21:04.15
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:04.93
STEP: creating mount and staging directories 08/08/22 16:21:04.93
------------------------------
• [0.779 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:04.93
    STEP: creating mount and staging directories 08/08/22 16:21:04.93
  << End Captured GinkgoWriter Output
... skipping 31 lines ...
    STEP: Checking the target path exists 08/08/22 16:21:06.14
    STEP: Unpublishing the volume 08/08/22 16:21:06.311
    STEP: Checking the target path was removed 08/08/22 16:21:06.315
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:06.897
STEP: creating mount and staging directories 08/08/22 16:21:06.897
------------------------------
• [0.757 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:06.897
    STEP: creating mount and staging directories 08/08/22 16:21:06.897
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:07.654
STEP: creating mount and staging directories 08/08/22 16:21:07.654
------------------------------
• [0.768 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:07.654
    STEP: creating mount and staging directories 08/08/22 16:21:07.654
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:08.422
STEP: creating mount and staging directories 08/08/22 16:21:08.422
STEP: creating a single node writer volume 08/08/22 16:21:08.813
------------------------------
• [0.759 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:08.422
    STEP: creating mount and staging directories 08/08/22 16:21:08.422
    STEP: creating a single node writer volume 08/08/22 16:21:08.813
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:09.182
STEP: creating mount and staging directories 08/08/22 16:21:09.182
------------------------------
• [0.712 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:09.182
    STEP: creating mount and staging directories 08/08/22 16:21:09.182
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:09.893
STEP: creating mount and staging directories 08/08/22 16:21:09.893
------------------------------
• [0.738 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:09.893
    STEP: creating mount and staging directories 08/08/22 16:21:09.893
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:10.631
STEP: creating mount and staging directories 08/08/22 16:21:10.631
------------------------------
• [1.227 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:10.631
    STEP: creating mount and staging directories 08/08/22 16:21:10.631
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:11.859
STEP: creating mount and staging directories 08/08/22 16:21:11.859
------------------------------
• [1.547 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:11.859
    STEP: creating mount and staging directories 08/08/22 16:21:11.859
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:13.406
STEP: creating mount and staging directories 08/08/22 16:21:13.407
------------------------------
• [1.498 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:13.406
    STEP: creating mount and staging directories 08/08/22 16:21:13.407
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume does not exist on the specified path
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:14.905
STEP: creating mount and staging directories 08/08/22 16:21:14.905
STEP: creating a single node writer volume for expansion 08/08/22 16:21:15.614
STEP: getting a node id 08/08/22 16:21:15.617
STEP: node staging volume 08/08/22 16:21:15.619
... skipping 2 lines ...
------------------------------
• [1.492 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume does not exist on the specified path
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:14.905
    STEP: creating mount and staging directories 08/08/22 16:21:14.905
    STEP: creating a single node writer volume for expansion 08/08/22 16:21:15.614
    STEP: getting a node id 08/08/22 16:21:15.617
    STEP: node staging volume 08/08/22 16:21:15.619
    STEP: publishing the volume on a node 08/08/22 16:21:15.621
    STEP: Get node volume stats 08/08/22 16:21:15.669
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:16.398
STEP: creating mount and staging directories 08/08/22 16:21:16.398
------------------------------
• [1.276 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:16.398
    STEP: creating mount and staging directories 08/08/22 16:21:16.398
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:17.674
STEP: creating mount and staging directories 08/08/22 16:21:17.675
STEP: creating a single node writer volume for expansion 08/08/22 16:21:18.399
------------------------------
• [1.514 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:17.674
    STEP: creating mount and staging directories 08/08/22 16:21:17.675
    STEP: creating a single node writer volume for expansion 08/08/22 16:21:18.399
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:19.189
STEP: creating mount and staging directories 08/08/22 16:21:19.189
------------------------------
• [1.414 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:19.189
    STEP: creating mount and staging directories 08/08/22 16:21:19.189
  << End Captured GinkgoWriter Output
... skipping 275 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:33.831
    STEP: creating mount and staging directories 08/08/22 16:21:33.832
    STEP: creating required new volumes 08/08/22 16:21:34.333
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:34.997
STEP: creating mount and staging directories 08/08/22 16:21:34.998
------------------------------
• [1.213 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:34.997
    STEP: creating mount and staging directories 08/08/22 16:21:34.998
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:36.21
STEP: creating mount and staging directories 08/08/22 16:21:36.21
------------------------------
• [1.538 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:36.21
    STEP: creating mount and staging directories 08/08/22 16:21:36.21
  << End Captured GinkgoWriter Output
... skipping 21 lines ...
    STEP: creating a volume 08/08/22 16:21:38.461
    STEP: creating a snapshot 08/08/22 16:21:38.464
    STEP: creating a snapshot with the same name and source volume ID 08/08/22 16:21:38.474
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:39.218
STEP: creating mount and staging directories 08/08/22 16:21:39.219
STEP: creating a snapshot 08/08/22 16:21:39.759
STEP: creating a new source volume 08/08/22 16:21:39.773
STEP: creating a snapshot with the same name but different source volume ID 08/08/22 16:21:39.778
I0808 16:21:39.833028   11824 resources.go:320] deleting snapshot ID 2eb56fd9-1736-11ed-8cf2-8a7df23e3f5f
------------------------------
• [1.086 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:39.218
    STEP: creating mount and staging directories 08/08/22 16:21:39.219
    STEP: creating a snapshot 08/08/22 16:21:39.759
... skipping 22 lines ...
    STEP: creating mount and staging directories 08/08/22 16:21:40.305
    STEP: creating a volume 08/08/22 16:21:40.767
    STEP: creating a snapshot 08/08/22 16:21:40.77
  << End Captured GinkgoWriter Output
------------------------------
DeleteSnapshot [Controller Server]
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366
STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:41.297
STEP: creating mount and staging directories 08/08/22 16:21:41.298
------------------------------
• [0.923 seconds]
DeleteSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.4:32282 08/08/22 16:21:41.297
    STEP: creating mount and staging directories 08/08/22 16:21:41.298
  << End Captured GinkgoWriter Output
... skipping 106 lines ...
[ReportAfterSuite] PASSED [0.012 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Ran 67 of 78 Specs in 74.294 seconds
SUCCESS! -- 67 Passed | 0 Failed | 1 Pending | 10 Skipped
Mon Aug  8 16:21:47 UTC 2022 go1.19 $ git init /home/prow/go/src/k8s.io/kubernetes
Initialized empty Git repository in /home/prow/go/src/k8s.io/kubernetes/.git/
Mon Aug  8 16:21:47 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git fetch --depth=1 https://github.com/kubernetes/kubernetes v1.21.0
From https://github.com/kubernetes/kubernetes
 * tag                 v1.21.0    -> FETCH_HEAD
Mon Aug  8 16:22:00 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git checkout FETCH_HEAD
... skipping 11 lines ...
HEAD is now at cb303e61 Release commit for Kubernetes v1.21.0
Mon Aug  8 16:22:04 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git clean -fdx

Using a modified version of k/k/test/e2e:


Mon Aug  8 16:22:04 UTC 2022 go1.19 $ curl --fail --location https://dl.google.com/go/go1.16.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  3  123M    3 4303k    0     0  16.1M      0  0:00:07 --:--:--  0:00:07 16.0M
 26  123M   26 32.3M    0     0  25.1M      0  0:00:04  0:00:01  0:00:03 25.1M
 45  123M   45 55.8M    0     0  24.9M      0  0:00:04  0:00:02  0:00:02 24.9M
 79  123M   79 98.1M    0     0  30.3M      0  0:00:04  0:00:03  0:00:01 30.3M
 98  123M   98  121M    0     0  28.6M      0  0:00:04  0:00:04 --:--:-- 28.6M
100  123M  100  123M    0     0  27.9M      0  0:00:04  0:00:04 --:--:-- 28.7M
Mon Aug  8 16:22:08 UTC 2022 go1.16 $ make WHAT=test/e2e/e2e.test -C/home/prow/go/src/k8s.io/kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
make[1]: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
... skipping 276 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 113 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 29 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 134 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 321 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 113 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 95 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 50 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 248 lines ...
STEP: Creating a kubernetes client
Aug  8 16:27:54.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0808 16:27:55.137441   64783 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:55.137: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  8 16:27:55.145: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  8 16:27:55.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-7204" for this suite.


S [SKIPPING] [0.903 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 93 lines ...
STEP: Creating a kubernetes client
Aug  8 16:27:54.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0808 16:27:55.905275   64556 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:55.905: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  8 16:27:55.908: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  8 16:27:55.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-7660" for this suite.


S [SKIPPING] [1.464 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 95 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 122 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:27:51.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
W0808 16:27:52.604294   64581 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:52.604: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  8 16:27:52.660: INFO: Creating resource for dynamic PV
Aug  8 16:27:52.660: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-7304-e2e-scnvm24
STEP: creating a claim
Aug  8 16:27:53.074: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioqsjwm] to have phase Bound
Aug  8 16:27:53.866: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioqsjwm found but phase is Pending instead of Bound.
Aug  8 16:27:55.874: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioqsjwm found and phase=Bound (2.79991263s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  8 16:27:59.923: INFO: Deleting pod "pod-eb5cbc25-66cf-42a2-a47c-7d209f645325" in namespace "volumemode-7304"
Aug  8 16:27:59.928: INFO: Wait up to 5m0s for pod "pod-eb5cbc25-66cf-42a2-a47c-7d209f645325" to be fully deleted
STEP: Deleting pvc
Aug  8 16:28:31.939: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.ioqsjwm"
Aug  8 16:28:31.946: INFO: Waiting up to 5m0s for PersistentVolume pvc-eeab2e04-6df5-4f14-bd60-b03bdafe8f3c to get deleted
Aug  8 16:28:31.953: INFO: PersistentVolume pvc-eeab2e04-6df5-4f14-bd60-b03bdafe8f3c found and phase=Bound (7.022386ms)
... skipping 7 lines ...

• [SLOW TEST:45.062 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":1,"skipped":42,"failed":0}
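The volumeMode test that just passed provisions a Filesystem-mode PVC and then deliberately consumes it as a raw block device, expecting the kubelet to reject the pod ("Waiting for the pod to fail"). A minimal sketch of that mismatch — names, image, and size below are illustrative, not taken from the log:

```yaml
# PVC provisioned with volumeMode: Filesystem (hypothetical names throughout).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-fs-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Mi
---
# Pod that attaches the same claim via volumeDevices (raw block) instead of
# volumeMounts (filesystem). The modes disagree, so the pod never starts,
# which is the failure the test waits for.
apiVersion: v1
kind: Pod
metadata:
  name: example-mismatch-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/e2e-test-images/busybox:1.29
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-fs-pvc
```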

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 49 lines ...
Aug  8 16:27:57.359: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8klf6 found but phase is Pending instead of Bound.
Aug  8 16:27:59.364: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8klf6 found but phase is Pending instead of Bound.
Aug  8 16:28:01.370: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8klf6 found but phase is Pending instead of Bound.
Aug  8 16:28:03.375: INFO: PersistentVolumeClaim hostpath.csi.k8s.io8klf6 found and phase=Bound (6.018532763s)
STEP: Expanding non-expandable pvc
Aug  8 16:28:03.383: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  8 16:28:03.392: INFO: Error updating pvc hostpath.csi.k8s.io8klf6: persistentvolumeclaims "hostpath.csi.k8s.io8klf6" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines ...
Aug  8 16:28:33.426: INFO: Error updating pvc hostpath.csi.k8s.io8klf6: persistentvolumeclaims "hostpath.csi.k8s.io8klf6" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  8 16:28:33.426: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io8klf6"
Aug  8 16:28:33.432: INFO: Waiting up to 5m0s for PersistentVolume pvc-1685dbe7-6193-4fbc-8886-3365375cba53 to get deleted
Aug  8 16:28:33.438: INFO: PersistentVolume pvc-1685dbe7-6193-4fbc-8886-3365375cba53 found and phase=Bound (5.413364ms)
Aug  8 16:28:38.442: INFO: PersistentVolume pvc-1685dbe7-6193-4fbc-8886-3365375cba53 was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":248,"failed":0}
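The repeated "forbidden" errors in this test are the expected behavior under verification: the API server only allows a PVC resize when the claim's StorageClass opts in via `allowVolumeExpansion`. A sketch of a class that would permit expansion — the class name is hypothetical, and whether the hostpath driver actually supports resize is a separate capability question:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-expandable
provisioner: hostpath.csi.k8s.io
# When this field is absent or false, updating the PVC's requested storage
# fails with the "only dynamically provisioned pvc can be resized and the
# storageclass that provisions the pvc must support resize" error seen above.
allowVolumeExpansion: true
```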

SSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support readOnly directory specified in the volumeMount
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
... skipping 17 lines ...
Aug  8 16:27:53.987: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:54.246: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iossf6g] to have phase Bound
Aug  8 16:27:54.342: INFO: PersistentVolumeClaim hostpath.csi.k8s.iossf6g found but phase is Pending instead of Bound.
Aug  8 16:27:56.345: INFO: PersistentVolumeClaim hostpath.csi.k8s.iossf6g found and phase=Bound (2.099212538s)
STEP: Creating pod pod-subpath-test-dynamicpv-h75k
STEP: Creating a pod to test subpath
Aug  8 16:27:56.355: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h75k" in namespace "provisioning-394" to be "Succeeded or Failed"
Aug  8 16:27:56.358: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838202ms
Aug  8 16:27:58.363: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007645181s
Aug  8 16:28:00.369: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013258565s
Aug  8 16:28:02.373: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017543142s
Aug  8 16:28:04.377: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021651076s
Aug  8 16:28:06.381: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02609061s
... skipping 11 lines ...
Aug  8 16:28:30.437: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.081959542s
Aug  8 16:28:32.443: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 36.087771239s
Aug  8 16:28:34.450: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 38.094673274s
Aug  8 16:28:36.455: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Pending", Reason="", readiness=false. Elapsed: 40.099826527s
Aug  8 16:28:38.459: INFO: Pod "pod-subpath-test-dynamicpv-h75k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.103708489s
STEP: Saw pod success
Aug  8 16:28:38.459: INFO: Pod "pod-subpath-test-dynamicpv-h75k" satisfied condition "Succeeded or Failed"
Aug  8 16:28:38.465: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-h75k container test-container-subpath-dynamicpv-h75k: <nil>
STEP: delete the pod
Aug  8 16:28:38.486: INFO: Waiting for pod pod-subpath-test-dynamicpv-h75k to disappear
Aug  8 16:28:38.490: INFO: Pod pod-subpath-test-dynamicpv-h75k no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h75k
Aug  8 16:28:38.490: INFO: Deleting pod "pod-subpath-test-dynamicpv-h75k" in namespace "provisioning-394"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 16:28:43.542: INFO: Driver hostpath.csi.k8s.io doesn't support ext3 -- skipping
... skipping 55 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 21 lines ...
Aug  8 16:27:58.155: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iozxjwm] to have phase Bound
Aug  8 16:27:58.158: INFO: PersistentVolumeClaim hostpath.csi.k8s.iozxjwm found but phase is Pending instead of Bound.
Aug  8 16:28:00.163: INFO: PersistentVolumeClaim hostpath.csi.k8s.iozxjwm found but phase is Pending instead of Bound.
Aug  8 16:28:02.172: INFO: PersistentVolumeClaim hostpath.csi.k8s.iozxjwm found and phase=Bound (4.017157676s)
STEP: Creating pod pod-subpath-test-dynamicpv-mx86
STEP: Creating a pod to test multi_subpath
Aug  8 16:28:02.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mx86" in namespace "provisioning-6231" to be "Succeeded or Failed"
Aug  8 16:28:02.202: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 7.875255ms
Aug  8 16:28:04.207: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013263705s
Aug  8 16:28:06.212: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01767062s
Aug  8 16:28:08.217: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022431112s
Aug  8 16:28:10.221: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0266654s
Aug  8 16:28:12.227: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03285667s
... skipping 21 lines ...
Aug  8 16:28:56.402: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 54.207732462s
Aug  8 16:28:58.411: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 56.217110445s
Aug  8 16:29:00.419: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 58.225083683s
Aug  8 16:29:02.423: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.229298141s
Aug  8 16:29:04.429: INFO: Pod "pod-subpath-test-dynamicpv-mx86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m2.235207738s
STEP: Saw pod success
Aug  8 16:29:04.429: INFO: Pod "pod-subpath-test-dynamicpv-mx86" satisfied condition "Succeeded or Failed"
Aug  8 16:29:04.435: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-mx86 container test-container-subpath-dynamicpv-mx86: <nil>
STEP: delete the pod
Aug  8 16:29:04.490: INFO: Waiting for pod pod-subpath-test-dynamicpv-mx86 to disappear
Aug  8 16:29:04.500: INFO: Pod pod-subpath-test-dynamicpv-mx86 no longer exists
STEP: Deleting pod
Aug  8 16:29:04.500: INFO: Deleting pod "pod-subpath-test-dynamicpv-mx86" in namespace "provisioning-6231"
... skipping 14 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support creating multiple subpath from same volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":-1,"completed":1,"skipped":56,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 46 lines ...
Aug  8 16:28:38.533: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:28:38.551: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io4255w] to have phase Bound
Aug  8 16:28:38.560: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4255w found but phase is Pending instead of Bound.
Aug  8 16:28:40.564: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4255w found and phase=Bound (2.01382655s)
STEP: Expanding non-expandable pvc
Aug  8 16:28:40.571: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  8 16:28:40.579: INFO: Error updating pvc hostpath.csi.k8s.io4255w: persistentvolumeclaims "hostpath.csi.k8s.io4255w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines ...
Aug  8 16:29:10.699: INFO: Error updating pvc hostpath.csi.k8s.io4255w: persistentvolumeclaims "hostpath.csi.k8s.io4255w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  8 16:29:10.699: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io4255w"
Aug  8 16:29:10.736: INFO: Waiting up to 5m0s for PersistentVolume pvc-1b284a7b-cb10-4947-bf9d-827bda7f9285 to get deleted
Aug  8 16:29:10.776: INFO: PersistentVolume pvc-1b284a7b-cb10-4947-bf9d-827bda7f9285 found and phase=Bound (39.956114ms)
Aug  8 16:29:15.783: INFO: PersistentVolume pvc-1b284a7b-cb10-4947-bf9d-827bda7f9285 was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":262,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 16:29:16.087: INFO: Driver hostpath.csi.k8s.io doesn't support ext4 -- skipping
... skipping 132 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":16,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 16:29:51.583: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 247 lines ...
Aug  8 16:27:52.748: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:53.018: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iog6vnh] to have phase Bound
Aug  8 16:27:53.414: INFO: PersistentVolumeClaim hostpath.csi.k8s.iog6vnh found but phase is Pending instead of Bound.
Aug  8 16:27:55.424: INFO: PersistentVolumeClaim hostpath.csi.k8s.iog6vnh found and phase=Bound (2.406438041s)
STEP: Creating pod pod-subpath-test-dynamicpv-k85c
STEP: Creating a pod to test atomic-volume-subpath
Aug  8 16:27:55.496: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-k85c" in namespace "provisioning-7616" to be "Succeeded or Failed"
Aug  8 16:27:55.512: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.670469ms
Aug  8 16:27:57.519: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022892392s
Aug  8 16:27:59.524: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028276998s
Aug  8 16:28:01.529: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033623543s
Aug  8 16:28:03.534: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038397459s
Aug  8 16:28:05.538: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042442602s
... skipping 41 lines ...
Aug  8 16:29:35.212: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Running", Reason="", readiness=true. Elapsed: 1m39.715930264s
Aug  8 16:29:37.418: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Running", Reason="", readiness=true. Elapsed: 1m41.922055671s
Aug  8 16:29:40.404: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.908809216s
Aug  8 16:29:43.958: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Running", Reason="", readiness=true. Elapsed: 1m48.461966938s
Aug  8 16:29:47.396: INFO: Pod "pod-subpath-test-dynamicpv-k85c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m51.900026561s
STEP: Saw pod success
Aug  8 16:29:47.396: INFO: Pod "pod-subpath-test-dynamicpv-k85c" satisfied condition "Succeeded or Failed"
Aug  8 16:29:47.784: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-k85c container test-container-subpath-dynamicpv-k85c: <nil>
STEP: delete the pod
Aug  8 16:29:49.482: INFO: Waiting for pod pod-subpath-test-dynamicpv-k85c to disappear
Aug  8 16:29:49.749: INFO: Pod pod-subpath-test-dynamicpv-k85c no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-k85c
Aug  8 16:29:49.749: INFO: Deleting pod "pod-subpath-test-dynamicpv-k85c" in namespace "provisioning-7616"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 16:29:55.881: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 44 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:269
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:27:53.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
W0808 16:27:55.061707   64610 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:55.061: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  8 16:27:55.086: INFO: Creating resource for dynamic PV
Aug  8 16:27:55.086: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-7032-e2e-scpxtht
STEP: creating a claim
Aug  8 16:27:55.128: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iotpzht] to have phase Bound
Aug  8 16:27:55.141: INFO: PersistentVolumeClaim hostpath.csi.k8s.iotpzht found but phase is Pending instead of Bound.
Aug  8 16:27:57.146: INFO: PersistentVolumeClaim hostpath.csi.k8s.iotpzht found and phase=Bound (2.018157473s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  8 16:28:35.188: INFO: Deleting pod "pod-fe5cef86-592e-4d97-b875-9018b008e5bf" in namespace "volumemode-7032"
Aug  8 16:28:35.195: INFO: Wait up to 5m0s for pod "pod-fe5cef86-592e-4d97-b875-9018b008e5bf" to be fully deleted
STEP: Deleting pvc
Aug  8 16:29:59.211: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iotpzht"
Aug  8 16:29:59.219: INFO: Waiting up to 5m0s for PersistentVolume pvc-ac2b641c-861a-4e03-a34d-a8ed7edbcdc9 to get deleted
Aug  8 16:29:59.223: INFO: PersistentVolume pvc-ac2b641c-861a-4e03-a34d-a8ed7edbcdc9 found and phase=Bound (3.541343ms)
... skipping 7 lines ...

• [SLOW TEST:130.294 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":1,"skipped":216,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes 
  should allow exec of files on the volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
... skipping 18 lines ...
Aug  8 16:27:56.777: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iogtrbw] to have phase Bound
Aug  8 16:27:56.785: INFO: PersistentVolumeClaim hostpath.csi.k8s.iogtrbw found but phase is Pending instead of Bound.
Aug  8 16:27:58.791: INFO: PersistentVolumeClaim hostpath.csi.k8s.iogtrbw found but phase is Pending instead of Bound.
Aug  8 16:28:00.796: INFO: PersistentVolumeClaim hostpath.csi.k8s.iogtrbw found and phase=Bound (4.018260773s)
STEP: Creating pod exec-volume-test-dynamicpv-s2qx
STEP: Creating a pod to test exec-volume-test
Aug  8 16:28:00.815: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-s2qx" in namespace "volume-2990" to be "Succeeded or Failed"
Aug  8 16:28:00.822: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.810771ms
Aug  8 16:28:02.830: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014192624s
Aug  8 16:28:04.834: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018099696s
Aug  8 16:28:06.839: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023799371s
Aug  8 16:28:08.846: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030873438s
Aug  8 16:28:10.858: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042999855s
... skipping 45 lines ...
Aug  8 16:29:51.070: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.254476245s
Aug  8 16:29:53.252: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.436895643s
Aug  8 16:29:55.260: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.444674401s
Aug  8 16:29:57.266: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.450280601s
Aug  8 16:29:59.269: INFO: Pod "exec-volume-test-dynamicpv-s2qx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m58.453360893s
STEP: Saw pod success
Aug  8 16:29:59.269: INFO: Pod "exec-volume-test-dynamicpv-s2qx" satisfied condition "Succeeded or Failed"
Aug  8 16:29:59.271: INFO: Trying to get logs from node csi-prow-worker2 pod exec-volume-test-dynamicpv-s2qx container exec-container-dynamicpv-s2qx: <nil>
STEP: delete the pod
Aug  8 16:29:59.286: INFO: Waiting for pod exec-volume-test-dynamicpv-s2qx to disappear
Aug  8 16:29:59.289: INFO: Pod exec-volume-test-dynamicpv-s2qx no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-s2qx
Aug  8 16:29:59.289: INFO: Deleting pod "exec-volume-test-dynamicpv-s2qx" in namespace "volume-2990"
... skipping 16 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
S
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":228,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 16:30:04.789: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 147 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":472,"failed":0}
Aug  8 16:30:11.436: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing single file [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
... skipping 17 lines ...
Aug  8 16:27:58.514: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2htvq found but phase is Pending instead of Bound.
Aug  8 16:28:00.521: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2htvq found but phase is Pending instead of Bound.
Aug  8 16:28:02.524: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2htvq found but phase is Pending instead of Bound.
Aug  8 16:28:04.529: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2htvq found and phase=Bound (6.020261595s)
STEP: Creating pod pod-subpath-test-dynamicpv-22hm
STEP: Creating a pod to test subpath
Aug  8 16:28:04.542: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-22hm" in namespace "provisioning-4676" to be "Succeeded or Failed"
Aug  8 16:28:04.545: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341012ms
Aug  8 16:28:06.551: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008603443s
Aug  8 16:28:08.555: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012606732s
Aug  8 16:28:10.560: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018052853s
Aug  8 16:28:12.565: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023033861s
Aug  8 16:28:14.570: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028181507s
... skipping 48 lines ...
Aug  8 16:30:01.638: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m57.096288315s
Aug  8 16:30:03.642: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m59.100210672s
Aug  8 16:30:05.703: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.161137792s
Aug  8 16:30:07.820: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.277959926s
Aug  8 16:30:09.823: INFO: Pod "pod-subpath-test-dynamicpv-22hm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m5.281271252s
STEP: Saw pod success
Aug  8 16:30:09.823: INFO: Pod "pod-subpath-test-dynamicpv-22hm" satisfied condition "Succeeded or Failed"
Aug  8 16:30:09.826: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-22hm container test-container-subpath-dynamicpv-22hm: <nil>
STEP: delete the pod
Aug  8 16:30:09.847: INFO: Waiting for pod pod-subpath-test-dynamicpv-22hm to disappear
Aug  8 16:30:09.853: INFO: Pod pod-subpath-test-dynamicpv-22hm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-22hm
Aug  8 16:30:09.853: INFO: Deleting pod "pod-subpath-test-dynamicpv-22hm" in namespace "provisioning-4676"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing single file [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":367,"failed":0}
Aug  8 16:30:14.896: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support readOnly file specified in the volumeMount [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
... skipping 18 lines ...
Aug  8 16:27:55.312: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2vksm] to have phase Bound
Aug  8 16:27:55.324: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2vksm found but phase is Pending instead of Bound.
Aug  8 16:27:57.327: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2vksm found but phase is Pending instead of Bound.
Aug  8 16:27:59.331: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2vksm found and phase=Bound (4.01703392s)
STEP: Creating pod pod-subpath-test-dynamicpv-kvs6
STEP: Creating a pod to test subpath
Aug  8 16:27:59.346: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kvs6" in namespace "provisioning-1669" to be "Succeeded or Failed"
Aug  8 16:27:59.350: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243295ms
Aug  8 16:28:01.354: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008164285s
Aug  8 16:28:03.359: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013545909s
Aug  8 16:28:05.364: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018455338s
Aug  8 16:28:07.369: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023208383s
Aug  8 16:28:09.512: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166330805s
... skipping 54 lines ...
Aug  8 16:30:06.118: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.772536422s
Aug  8 16:30:08.391: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.045090161s
Aug  8 16:30:10.395: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.049186116s
Aug  8 16:30:12.399: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.053312517s
Aug  8 16:30:14.404: INFO: Pod "pod-subpath-test-dynamicpv-kvs6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m15.058566709s
STEP: Saw pod success
Aug  8 16:30:14.404: INFO: Pod "pod-subpath-test-dynamicpv-kvs6" satisfied condition "Succeeded or Failed"
Aug  8 16:30:14.408: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-kvs6 container test-container-subpath-dynamicpv-kvs6: <nil>
STEP: delete the pod
Aug  8 16:30:14.698: INFO: Waiting for pod pod-subpath-test-dynamicpv-kvs6 to disappear
Aug  8 16:30:14.871: INFO: Pod pod-subpath-test-dynamicpv-kvs6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kvs6
Aug  8 16:30:14.871: INFO: Deleting pod "pod-subpath-test-dynamicpv-kvs6" in namespace "provisioning-1669"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly file specified in the volumeMount [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":36,"failed":0}
Aug  8 16:30:19.929: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:27:54.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0808 16:27:57.195890   64823 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:57.195: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Aug  8 16:27:57.201: INFO: Creating resource for dynamic PV
Aug  8 16:27:57.201: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4392-e2e-sclqxx9
STEP: creating a claim
Aug  8 16:27:57.206: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:57.213: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iovsckk] to have phase Bound
Aug  8 16:27:57.216: INFO: PersistentVolumeClaim hostpath.csi.k8s.iovsckk found but phase is Pending instead of Bound.
Aug  8 16:27:59.238: INFO: PersistentVolumeClaim hostpath.csi.k8s.iovsckk found but phase is Pending instead of Bound.
Aug  8 16:28:01.244: INFO: PersistentVolumeClaim hostpath.csi.k8s.iovsckk found and phase=Bound (4.030679108s)
STEP: Creating pod pod-subpath-test-dynamicpv-2v2t
STEP: Checking for subpath error in container status
Aug  8 16:30:07.607: INFO: Deleting pod "pod-subpath-test-dynamicpv-2v2t" in namespace "provisioning-4392"
Aug  8 16:30:07.859: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-2v2t" to be fully deleted
STEP: Deleting pod
Aug  8 16:30:37.913: INFO: Deleting pod "pod-subpath-test-dynamicpv-2v2t" in namespace "provisioning-4392"
STEP: Deleting pvc
Aug  8 16:30:37.921: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iovsckk"
... skipping 9 lines ...

• [SLOW TEST:168.265 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":175,"failed":0}
Aug  8 16:30:42.959: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support multiple inline ephemeral volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
... skipping 40 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":103,"failed":0}
Aug  8 16:30:44.535: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directory
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
... skipping 17 lines ...
Aug  8 16:27:58.661: INFO: PersistentVolumeClaim hostpath.csi.k8s.io9dw8p found but phase is Pending instead of Bound.
Aug  8 16:28:00.667: INFO: PersistentVolumeClaim hostpath.csi.k8s.io9dw8p found but phase is Pending instead of Bound.
Aug  8 16:28:02.672: INFO: PersistentVolumeClaim hostpath.csi.k8s.io9dw8p found but phase is Pending instead of Bound.
Aug  8 16:28:04.677: INFO: PersistentVolumeClaim hostpath.csi.k8s.io9dw8p found and phase=Bound (6.019234968s)
STEP: Creating pod pod-subpath-test-dynamicpv-4w5q
STEP: Creating a pod to test subpath
Aug  8 16:28:04.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4w5q" in namespace "provisioning-3912" to be "Succeeded or Failed"
Aug  8 16:28:04.702: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248448ms
Aug  8 16:28:06.707: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008260706s
Aug  8 16:28:08.711: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012352251s
Aug  8 16:28:10.720: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020959655s
Aug  8 16:28:12.992: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293384065s
Aug  8 16:28:15.051: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.351995047s
... skipping 64 lines ...
Aug  8 16:30:33.530: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.83072759s
Aug  8 16:30:35.535: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.835890743s
Aug  8 16:30:37.539: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.840284386s
Aug  8 16:30:39.545: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.845944717s
Aug  8 16:30:41.549: INFO: Pod "pod-subpath-test-dynamicpv-4w5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m36.850656127s
STEP: Saw pod success
Aug  8 16:30:41.550: INFO: Pod "pod-subpath-test-dynamicpv-4w5q" satisfied condition "Succeeded or Failed"
Aug  8 16:30:41.553: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-4w5q container test-container-volume-dynamicpv-4w5q: <nil>
STEP: delete the pod
Aug  8 16:30:41.570: INFO: Waiting for pod pod-subpath-test-dynamicpv-4w5q to disappear
Aug  8 16:30:41.574: INFO: Pod pod-subpath-test-dynamicpv-4w5q no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4w5q
Aug  8 16:30:41.575: INFO: Deleting pod "pod-subpath-test-dynamicpv-4w5q" in namespace "provisioning-3912"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":115,"failed":0}
Aug  8 16:30:46.612: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":76,"failed":0}
Aug  8 16:30:48.616: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should verify container cannot write to subpath readonly volumes [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
... skipping 17 lines ...
Aug  8 16:27:55.877: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:55.885: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iov8kwp] to have phase Bound
Aug  8 16:27:55.902: INFO: PersistentVolumeClaim hostpath.csi.k8s.iov8kwp found but phase is Pending instead of Bound.
Aug  8 16:27:57.906: INFO: PersistentVolumeClaim hostpath.csi.k8s.iov8kwp found but phase is Pending instead of Bound.
Aug  8 16:27:59.915: INFO: PersistentVolumeClaim hostpath.csi.k8s.iov8kwp found and phase=Bound (4.029781311s)
STEP: Creating pod to format volume volume-prep-provisioning-3320
Aug  8 16:27:59.933: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-3320" in namespace "provisioning-3320" to be "Succeeded or Failed"
Aug  8 16:27:59.935: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397729ms
Aug  8 16:28:01.945: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011599851s
Aug  8 16:28:03.949: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016306742s
Aug  8 16:28:05.956: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022515638s
Aug  8 16:28:07.960: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026526383s
Aug  8 16:28:09.963: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030266305s
... skipping 7 lines ...
Aug  8 16:28:25.997: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 26.063635224s
Aug  8 16:28:28.001: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06838836s
Aug  8 16:28:30.005: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 30.072090989s
Aug  8 16:28:32.014: INFO: Pod "volume-prep-provisioning-3320": Phase="Pending", Reason="", readiness=false. Elapsed: 32.080990916s
Aug  8 16:28:34.026: INFO: Pod "volume-prep-provisioning-3320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.092568666s
STEP: Saw pod success
Aug  8 16:28:34.026: INFO: Pod "volume-prep-provisioning-3320" satisfied condition "Succeeded or Failed"
Aug  8 16:28:34.026: INFO: Deleting pod "volume-prep-provisioning-3320" in namespace "provisioning-3320"
Aug  8 16:28:34.054: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-3320" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-tz2p
STEP: Checking for subpath error in container status
Aug  8 16:30:54.214: INFO: Deleting pod "pod-subpath-test-dynamicpv-tz2p" in namespace "provisioning-3320"
Aug  8 16:30:54.223: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-tz2p" to be fully deleted
STEP: Deleting pod
Aug  8 16:30:54.226: INFO: Deleting pod "pod-subpath-test-dynamicpv-tz2p" in namespace "provisioning-3320"
STEP: Deleting pvc
Aug  8 16:30:54.235: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iov8kwp"
... skipping 12 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should verify container cannot write to subpath readonly volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":-1,"completed":1,"skipped":2,"failed":0}
Aug  8 16:30:59.262: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support non-existent path
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
... skipping 18 lines ...
Aug  8 16:27:55.708: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2jrhq] to have phase Bound
Aug  8 16:27:55.712: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2jrhq found but phase is Pending instead of Bound.
Aug  8 16:27:57.726: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2jrhq found but phase is Pending instead of Bound.
Aug  8 16:27:59.729: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2jrhq found and phase=Bound (4.020916611s)
STEP: Creating pod pod-subpath-test-dynamicpv-69s2
STEP: Creating a pod to test subpath
Aug  8 16:27:59.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-69s2" in namespace "provisioning-35" to be "Succeeded or Failed"
Aug  8 16:27:59.744: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868585ms
Aug  8 16:28:01.747: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006440903s
Aug  8 16:28:03.752: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010695737s
Aug  8 16:28:05.756: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014769181s
Aug  8 16:28:07.761: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019575862s
Aug  8 16:28:09.764: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023507931s
... skipping 73 lines ...
Aug  8 16:30:47.912: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.171094695s
Aug  8 16:30:49.916: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.175537196s
Aug  8 16:30:51.920: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.17928035s
Aug  8 16:30:53.924: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.183382463s
Aug  8 16:30:55.929: INFO: Pod "pod-subpath-test-dynamicpv-69s2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m56.188437789s
STEP: Saw pod success
Aug  8 16:30:55.929: INFO: Pod "pod-subpath-test-dynamicpv-69s2" satisfied condition "Succeeded or Failed"
Aug  8 16:30:55.933: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-69s2 container test-container-volume-dynamicpv-69s2: <nil>
STEP: delete the pod
Aug  8 16:30:55.950: INFO: Waiting for pod pod-subpath-test-dynamicpv-69s2 to disappear
Aug  8 16:30:55.952: INFO: Pod pod-subpath-test-dynamicpv-69s2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-69s2
Aug  8 16:30:55.952: INFO: Deleting pod "pod-subpath-test-dynamicpv-69s2" in namespace "provisioning-35"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":87,"failed":0}
Aug  8 16:31:00.987: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:27:54.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0808 16:27:56.592755   64901 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:56.592: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
Aug  8 16:27:56.596: INFO: Creating resource for dynamic PV
Aug  8 16:27:56.596: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8434-e2e-sc5xxm5
STEP: creating a claim
Aug  8 16:27:56.604: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:56.614: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iobwlpc] to have phase Bound
Aug  8 16:27:56.621: INFO: PersistentVolumeClaim hostpath.csi.k8s.iobwlpc found but phase is Pending instead of Bound.
Aug  8 16:27:58.624: INFO: PersistentVolumeClaim hostpath.csi.k8s.iobwlpc found but phase is Pending instead of Bound.
Aug  8 16:28:00.628: INFO: PersistentVolumeClaim hostpath.csi.k8s.iobwlpc found and phase=Bound (4.013119725s)
STEP: Creating pod pod-subpath-test-dynamicpv-gn2x
STEP: Checking for subpath error in container status
Aug  8 16:30:36.651: INFO: Deleting pod "pod-subpath-test-dynamicpv-gn2x" in namespace "provisioning-8434"
Aug  8 16:30:36.655: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-gn2x" to be fully deleted
STEP: Deleting pod
Aug  8 16:30:58.664: INFO: Deleting pod "pod-subpath-test-dynamicpv-gn2x" in namespace "provisioning-8434"
STEP: Deleting pvc
Aug  8 16:30:58.666: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iobwlpc"
... skipping 9 lines ...

• [SLOW TEST:189.042 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":87,"failed":0}
Aug  8 16:31:03.703: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directories when readOnly specified in the volumeSource
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
... skipping 17 lines ...
Aug  8 16:27:54.807: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:54.902: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io45csb] to have phase Bound
Aug  8 16:27:54.921: INFO: PersistentVolumeClaim hostpath.csi.k8s.io45csb found but phase is Pending instead of Bound.
Aug  8 16:27:56.929: INFO: PersistentVolumeClaim hostpath.csi.k8s.io45csb found and phase=Bound (2.026157859s)
STEP: Creating pod pod-subpath-test-dynamicpv-xkhb
STEP: Creating a pod to test subpath
Aug  8 16:27:56.965: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xkhb" in namespace "provisioning-7373" to be "Succeeded or Failed"
Aug  8 16:27:56.999: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.871938ms
Aug  8 16:27:59.002: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037139358s
Aug  8 16:28:01.006: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040701091s
Aug  8 16:28:03.011: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045402099s
Aug  8 16:28:05.015: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049902106s
Aug  8 16:28:07.019: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053948597s
... skipping 53 lines ...
Aug  8 16:30:03.909: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.944130868s
Aug  8 16:30:05.960: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.994375187s
Aug  8 16:30:08.256: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.291009678s
Aug  8 16:30:10.260: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.294370263s
Aug  8 16:30:12.264: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m15.298467171s
STEP: Saw pod success
Aug  8 16:30:12.264: INFO: Pod "pod-subpath-test-dynamicpv-xkhb" satisfied condition "Succeeded or Failed"
Aug  8 16:30:12.267: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-xkhb container test-container-subpath-dynamicpv-xkhb: <nil>
STEP: delete the pod
Aug  8 16:30:12.289: INFO: Waiting for pod pod-subpath-test-dynamicpv-xkhb to disappear
Aug  8 16:30:12.292: INFO: Pod pod-subpath-test-dynamicpv-xkhb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xkhb
Aug  8 16:30:12.292: INFO: Deleting pod "pod-subpath-test-dynamicpv-xkhb" in namespace "provisioning-7373"
STEP: Creating pod pod-subpath-test-dynamicpv-xkhb
STEP: Creating a pod to test subpath
Aug  8 16:30:12.305: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xkhb" in namespace "provisioning-7373" to be "Succeeded or Failed"
Aug  8 16:30:12.309: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709719ms
Aug  8 16:30:14.315: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009155311s
Aug  8 16:30:16.319: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013575852s
Aug  8 16:30:18.323: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017756456s
Aug  8 16:30:20.327: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021643932s
Aug  8 16:30:22.331: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025190415s
... skipping 17 lines ...
Aug  8 16:30:58.406: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.100204451s
Aug  8 16:31:00.410: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 48.103983262s
Aug  8 16:31:02.417: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 50.111617965s
Aug  8 16:31:04.421: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Pending", Reason="", readiness=false. Elapsed: 52.115167584s
Aug  8 16:31:06.424: INFO: Pod "pod-subpath-test-dynamicpv-xkhb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.118872185s
STEP: Saw pod success
Aug  8 16:31:06.424: INFO: Pod "pod-subpath-test-dynamicpv-xkhb" satisfied condition "Succeeded or Failed"
Aug  8 16:31:06.427: INFO: Trying to get logs from node csi-prow-worker2 pod pod-subpath-test-dynamicpv-xkhb container test-container-subpath-dynamicpv-xkhb: <nil>
STEP: delete the pod
Aug  8 16:31:06.439: INFO: Waiting for pod pod-subpath-test-dynamicpv-xkhb to disappear
Aug  8 16:31:06.442: INFO: Pod pod-subpath-test-dynamicpv-xkhb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xkhb
Aug  8 16:31:06.442: INFO: Deleting pod "pod-subpath-test-dynamicpv-xkhb" in namespace "provisioning-7373"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":76,"failed":0}
Aug  8 16:31:11.477: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:27:53.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0808 16:27:54.986845   64686 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 16:27:54.986: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Aug  8 16:27:54.996: INFO: Creating resource for dynamic PV
Aug  8 16:27:54.996: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4541-e2e-scph4jn
STEP: creating a claim
Aug  8 16:27:55.010: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:27:55.057: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2zqtx] to have phase Bound
Aug  8 16:27:55.080: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2zqtx found but phase is Pending instead of Bound.
Aug  8 16:27:57.084: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2zqtx found and phase=Bound (2.026187279s)
STEP: Creating pod pod-subpath-test-dynamicpv-7lv7
STEP: Checking for subpath error in container status
Aug  8 16:30:27.107: INFO: Deleting pod "pod-subpath-test-dynamicpv-7lv7" in namespace "provisioning-4541"
Aug  8 16:30:27.112: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-7lv7" to be fully deleted
STEP: Deleting pod
Aug  8 16:31:15.123: INFO: Deleting pod "pod-subpath-test-dynamicpv-7lv7" in namespace "provisioning-4541"
STEP: Deleting pvc
Aug  8 16:31:15.125: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io2zqtx"
... skipping 9 lines ...

• [SLOW TEST:206.706 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
Aug  8 16:31:20.156: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":166,"failed":0}
Aug  8 16:31:29.178: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 50 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":126,"failed":0}
Aug  8 16:31:30.470: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":19,"failed":0}
Aug  8 16:31:35.323: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
... skipping 42 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should be able to unmount after the subpath directory is deleted [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":117,"failed":0}
Aug  8 16:31:35.570: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should support multiple inline ephemeral volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
... skipping 26 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":3,"skipped":577,"failed":0}
Aug  8 16:31:37.646: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 36 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":2,"skipped":125,"failed":0}
Aug  8 16:31:40.330: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 16:29:53.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
Aug  8 16:29:54.095: INFO: Creating resource for dynamic PV
Aug  8 16:29:54.095: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8479-e2e-sct6c2n
STEP: creating a claim
Aug  8 16:29:54.106: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:29:54.126: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iowhhwd] to have phase Bound
Aug  8 16:29:54.134: INFO: PersistentVolumeClaim hostpath.csi.k8s.iowhhwd found but phase is Pending instead of Bound.
Aug  8 16:29:56.145: INFO: PersistentVolumeClaim hostpath.csi.k8s.iowhhwd found and phase=Bound (2.019627977s)
STEP: Creating pod pod-subpath-test-dynamicpv-6ms9
STEP: Checking for subpath error in container status
Aug  8 16:31:32.182: INFO: Deleting pod "pod-subpath-test-dynamicpv-6ms9" in namespace "provisioning-8479"
Aug  8 16:31:32.187: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-6ms9" to be fully deleted
STEP: Deleting pod
Aug  8 16:31:38.197: INFO: Deleting pod "pod-subpath-test-dynamicpv-6ms9" in namespace "provisioning-8479"
STEP: Deleting pvc
Aug  8 16:31:38.200: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iowhhwd"
... skipping 9 lines ...

• [SLOW TEST:109.297 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":-1,"completed":2,"skipped":407,"failed":0}
Aug  8 16:31:43.234: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
... skipping 123 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":185,"failed":0}
Aug  8 16:31:46.046: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 43 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":36,"failed":0}
Aug  8 16:31:48.084: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
... skipping 200 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":134,"failed":0}
Aug  8 16:31:49.238: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 52 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":6,"failed":0}
Aug  8 16:31:49.408: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 59 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":306,"failed":0}
Aug  8 16:31:51.696: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 41 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":118,"failed":0}
Aug  8 16:31:52.889: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 48 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":269,"failed":0}
Aug  8 16:31:52.963: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 35 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":67,"failed":0}
Aug  8 16:31:53.466: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO 
  should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
... skipping 43 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumeIO
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":71,"failed":0}
Aug  8 16:31:54.595: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 125 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":76,"failed":0}
Aug  8 16:31:55.462: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes 
  should store data
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 117 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":82,"failed":0}
Aug  8 16:31:56.204: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 49 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":68,"failed":0}
Aug  8 16:31:56.310: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 132 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":229,"failed":0}
Aug  8 16:31:57.887: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 125 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":223,"failed":0}
Aug  8 16:32:00.253: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 62 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":61,"failed":0}
Aug  8 16:32:05.034: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using file as subpath [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335
... skipping 65 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using file as subpath [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":87,"failed":0}
Aug  8 16:32:07.920: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 47 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":227,"failed":0}
Aug  8 16:32:29.734: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using directory as subpath [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320
... skipping 37 lines ...
Aug  8 16:30:38.586: INFO: stderr: ""
Aug  8 16:30:38.587: INFO: stdout: ""
Aug  8 16:30:38.587: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
Aug  8 16:31:32.595: INFO: Container has restart count: 3
Aug  8 16:31:40.593: INFO: Container has restart count: 4
Aug  8 16:32:38.606: FAIL: while waiting for container to stabilize
Unexpected error:
    <*errors.errorString | 0xc0002c4250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 37 lines ...
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:33 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:33 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Created: Created container test-container-subpath-dynamicpv-gg2s
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:33 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Started: Started container test-container-subpath-dynamicpv-gg2s
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:33 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:33 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Created: Created container test-container-volume-dynamicpv-gg2s
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:28:34 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Started: Started container test-container-volume-dynamicpv-gg2s
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:30:00 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Unhealthy: Liveness probe failed: cat: can't open '/probe-volume/probe-file': No such file or directory

Aug  8 16:32:53.660: INFO: At 2022-08-08 16:30:00 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} Killing: Container test-container-subpath-dynamicpv-gg2s failed liveness probe, will be restarted
Aug  8 16:32:53.660: INFO: At 2022-08-08 16:30:13 +0000 UTC - event for pod-subpath-test-dynamicpv-gg2s: {kubelet csi-prow-worker2} BackOff: Back-off restarting failed container
Aug  8 16:32:53.664: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug  8 16:32:53.664: INFO: 
Aug  8 16:32:53.667: INFO: 
Logging node info for node csi-prow-control-plane
Aug  8 16:32:53.669: INFO: Node Info: &Node{ObjectMeta:{csi-prow-control-plane    692031fd-2c19-41fb-87c7-7db260764306 3867 0 2022-08-08 16:19:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:csi-prow-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-08-08 16:19:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-08-08 16:19:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-08-08 16:19:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/csi-prow/csi-prow-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-08-08 16:29:43 +0000 UTC,LastTransitionTime:2022-08-08 16:19:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-08-08 16:29:43 +0000 UTC,LastTransitionTime:2022-08-08 16:19:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-08-08 16:29:43 +0000 UTC,LastTransitionTime:2022-08-08 16:19:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-08-08 16:29:43 +0000 UTC,LastTransitionTime:2022-08-08 16:19:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:csi-prow-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:13fc72b8239b4e32b1acc93e0baf0ea1,SystemUUID:7e12269d-4c46-4eba-9542-fbff7edb2a5f,BootID:38276d15-3338-44b8-8098-5bd5a32fc555,KernelVersion:5.4.0-1068-gke,OSImage:Ubuntu 21.04,ContainerRuntimeVersion:containerd://1.5.2,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:132714699,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:126834637,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:121042741,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:51865396,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:12945155,},ContainerImage{Names:[k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug  8 16:32:53.670: INFO: 
... skipping 80 lines ...
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using directory as subpath [Slow] [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320

    Aug  8 16:32:38.606: while waiting for container to stabilize
    Unexpected error:
        <*errors.errorString | 0xc0002c4250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870
------------------------------
{"msg":"FAILED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":-1,"completed":0,"skipped":18,"failed":1,"failures":["External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]"]}
Aug  8 16:32:54.252: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 47 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":44,"failed":0}
Aug  8 16:32:59.635: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":47,"failed":0}
Aug  8 16:31:48.958: INFO: Running AfterSuite actions on all nodes
Aug  8 16:32:59.704: INFO: Running AfterSuite actions on node 1
Aug  8 16:32:59.704: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 16:32:59.705: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 1 Failure:

[Fail] External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath [It] should support restarting containers using directory as subpath [Slow] 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870

Ran 48 of 5976 Specs in 309.149 seconds
FAIL! -- 47 Passed | 1 Failed | 0 Pending | 5928 Skipped


Ginkgo ran 1 suite in 5m37.015255117s
Test Suite Failed
Mon Aug  8 16:32:59 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
WARNING: E2E parallel failed
Mon Aug  8 16:33:00 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.3zhlDRENWi/e2e-repo-list ginkgo -v -p -nodes 40 -focus=External.Storage.*(\[Feature:VolumeSnapshotDataSource\]) -skip=\[Serial\]|\[Disruptive\] /home/prow/go/pkg/csiprow.3zhlDRENWi/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.3zhlDRENWi/test-driver.yaml
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659976380 - Will randomize all specs
Will run 5976 specs

... skipping 412 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":125,"failed":0}
Aug  8 16:34:16.750: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning 
  should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
... skipping 108 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":4,"failed":0}
Aug  8 16:34:35.323: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 16:33:27.934: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:33:27.950: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iottxqv] to have phase Bound
Aug  8 16:33:27.958: INFO: PersistentVolumeClaim hostpath.csi.k8s.iottxqv found but phase is Pending instead of Bound.
Aug  8 16:33:29.962: INFO: PersistentVolumeClaim hostpath.csi.k8s.iottxqv found and phase=Bound (2.012469756s)
STEP: [init] starting a pod to use the claim
Aug  8 16:33:29.975: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-khd4q" in namespace "snapshotting-5524" to be "Succeeded or Failed"
Aug  8 16:33:29.991: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Pending", Reason="", readiness=false. Elapsed: 15.951145ms
Aug  8 16:33:31.995: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019586326s
Aug  8 16:33:33.999: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023956775s
Aug  8 16:33:36.002: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027134246s
Aug  8 16:33:38.006: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031342151s
Aug  8 16:33:40.010: INFO: Pod "pvc-snapshottable-tester-khd4q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.034646097s
STEP: Saw pod success
Aug  8 16:33:40.010: INFO: Pod "pvc-snapshottable-tester-khd4q" satisfied condition "Succeeded or Failed"
Aug  8 16:33:40.026: INFO: Pod pvc-snapshottable-tester-khd4q has the following logs: 
Aug  8 16:33:40.027: INFO: Deleting pod "pvc-snapshottable-tester-khd4q" in namespace "snapshotting-5524"
Aug  8 16:33:40.037: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-khd4q" to be fully deleted
Aug  8 16:33:40.040: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iottxqv] to have phase Bound
Aug  8 16:33:40.043: INFO: PersistentVolumeClaim hostpath.csi.k8s.iottxqv found and phase=Bound (3.03146ms)
STEP: [init] checking the claim
... skipping 11 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 16:33:42.086: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-kl58j" in namespace "snapshotting-5524" to be "Succeeded or Failed"
Aug  8 16:33:42.092: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 3.73411ms
Aug  8 16:33:44.098: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010202233s
Aug  8 16:33:46.104: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016288616s
Aug  8 16:33:48.107: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019519826s
Aug  8 16:33:50.111: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023515854s
Aug  8 16:33:52.116: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028231519s
Aug  8 16:33:54.120: INFO: Pod "pvc-snapshottable-data-tester-kl58j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032004412s
STEP: Saw pod success
Aug  8 16:33:54.120: INFO: Pod "pvc-snapshottable-data-tester-kl58j" satisfied condition "Succeeded or Failed"
Aug  8 16:33:54.129: INFO: Pod pvc-snapshottable-data-tester-kl58j has the following logs: 
Aug  8 16:33:54.129: INFO: Deleting pod "pvc-snapshottable-data-tester-kl58j" in namespace "snapshotting-5524"
Aug  8 16:33:54.142: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-kl58j" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 16:34:04.164: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:42823 --kubeconfig=/root/.kube/config --namespace=snapshotting-5524 exec restored-pvc-tester-hbztj --namespace=snapshotting-5524 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":1,"failed":0}
Aug  8 16:34:53.434: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 16:33:27.950: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:33:27.975: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io7ndxs] to have phase Bound
Aug  8 16:33:27.981: INFO: PersistentVolumeClaim hostpath.csi.k8s.io7ndxs found but phase is Pending instead of Bound.
Aug  8 16:33:29.993: INFO: PersistentVolumeClaim hostpath.csi.k8s.io7ndxs found and phase=Bound (2.01741659s)
STEP: [init] starting a pod to use the claim
Aug  8 16:33:30.023: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-rbcdv" in namespace "snapshotting-435" to be "Succeeded or Failed"
Aug  8 16:33:30.031: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.430179ms
Aug  8 16:33:32.034: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011594065s
Aug  8 16:33:34.039: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01639011s
Aug  8 16:33:36.043: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019978295s
Aug  8 16:33:38.048: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025093415s
Aug  8 16:33:40.051: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028496401s
Aug  8 16:33:42.056: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033341036s
Aug  8 16:33:44.061: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.038064095s
Aug  8 16:33:46.066: INFO: Pod "pvc-snapshottable-tester-rbcdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.043552592s
STEP: Saw pod success
Aug  8 16:33:46.066: INFO: Pod "pvc-snapshottable-tester-rbcdv" satisfied condition "Succeeded or Failed"
Aug  8 16:33:46.076: INFO: Pod pvc-snapshottable-tester-rbcdv has the following logs: 
Aug  8 16:33:46.076: INFO: Deleting pod "pvc-snapshottable-tester-rbcdv" in namespace "snapshotting-435"
Aug  8 16:33:46.084: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-rbcdv" to be fully deleted
Aug  8 16:33:46.089: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io7ndxs] to have phase Bound
Aug  8 16:33:46.091: INFO: PersistentVolumeClaim hostpath.csi.k8s.io7ndxs found and phase=Bound (2.435204ms)
STEP: [init] checking the claim
... skipping 33 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 16:33:56.235: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-75t65" in namespace "snapshotting-435" to be "Succeeded or Failed"
Aug  8 16:33:56.238: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896981ms
Aug  8 16:33:58.242: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006863669s
Aug  8 16:34:00.246: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010565639s
Aug  8 16:34:02.251: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015257795s
Aug  8 16:34:04.255: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019418203s
Aug  8 16:34:06.259: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023804504s
Aug  8 16:34:08.264: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028890411s
Aug  8 16:34:10.269: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033801189s
Aug  8 16:34:12.273: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Pending", Reason="", readiness=false. Elapsed: 16.037293945s
Aug  8 16:34:14.277: INFO: Pod "pvc-snapshottable-data-tester-75t65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.041357062s
STEP: Saw pod success
Aug  8 16:34:14.277: INFO: Pod "pvc-snapshottable-data-tester-75t65" satisfied condition "Succeeded or Failed"
Aug  8 16:34:14.286: INFO: Pod pvc-snapshottable-data-tester-75t65 has the following logs: 
Aug  8 16:34:14.286: INFO: Deleting pod "pvc-snapshottable-data-tester-75t65" in namespace "snapshotting-435"
Aug  8 16:34:14.296: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-75t65" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 16:34:20.329: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:42823 --kubeconfig=/root/.kube/config --namespace=snapshotting-435 exec restored-pvc-tester-mm9dx --namespace=snapshotting-435 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":81,"failed":0}
Aug  8 16:35:03.587: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 16:33:27.935: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:33:27.950: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iotkx5d] to have phase Bound
Aug  8 16:33:27.954: INFO: PersistentVolumeClaim hostpath.csi.k8s.iotkx5d found but phase is Pending instead of Bound.
Aug  8 16:33:29.958: INFO: PersistentVolumeClaim hostpath.csi.k8s.iotkx5d found and phase=Bound (2.008456075s)
STEP: [init] starting a pod to use the claim
Aug  8 16:33:29.971: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-p2mf8" in namespace "snapshotting-3910" to be "Succeeded or Failed"
Aug  8 16:33:29.987: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.832877ms
Aug  8 16:33:31.991: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019852188s
Aug  8 16:33:33.995: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024041785s
Aug  8 16:33:35.999: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027954811s
Aug  8 16:33:38.005: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033314666s
Aug  8 16:33:40.008: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.036951913s
Aug  8 16:33:42.012: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.040540143s
Aug  8 16:33:44.017: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.045660223s
Aug  8 16:33:46.022: INFO: Pod "pvc-snapshottable-tester-p2mf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.050376871s
STEP: Saw pod success
Aug  8 16:33:46.022: INFO: Pod "pvc-snapshottable-tester-p2mf8" satisfied condition "Succeeded or Failed"
Aug  8 16:33:46.032: INFO: Pod pvc-snapshottable-tester-p2mf8 has the following logs: 
Aug  8 16:33:46.032: INFO: Deleting pod "pvc-snapshottable-tester-p2mf8" in namespace "snapshotting-3910"
Aug  8 16:33:46.046: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-p2mf8" to be fully deleted
Aug  8 16:33:46.052: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iotkx5d] to have phase Bound
Aug  8 16:33:46.056: INFO: PersistentVolumeClaim hostpath.csi.k8s.iotkx5d found and phase=Bound (3.532102ms)
STEP: [init] checking the claim
... skipping 12 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 16:33:50.117: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-cpq8s" in namespace "snapshotting-3910" to be "Succeeded or Failed"
Aug  8 16:33:50.120: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.198868ms
Aug  8 16:33:52.126: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008895158s
Aug  8 16:33:54.130: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01280656s
Aug  8 16:33:56.135: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01749482s
Aug  8 16:33:58.139: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021961734s
Aug  8 16:34:00.144: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026938159s
Aug  8 16:34:02.149: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031706343s
Aug  8 16:34:04.153: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035587086s
Aug  8 16:34:06.157: INFO: Pod "pvc-snapshottable-data-tester-cpq8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.040282986s
STEP: Saw pod success
Aug  8 16:34:06.157: INFO: Pod "pvc-snapshottable-data-tester-cpq8s" satisfied condition "Succeeded or Failed"
Aug  8 16:34:06.166: INFO: Pod pvc-snapshottable-data-tester-cpq8s has the following logs: 
Aug  8 16:34:06.166: INFO: Deleting pod "pvc-snapshottable-data-tester-cpq8s" in namespace "snapshotting-3910"
Aug  8 16:34:06.177: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-cpq8s" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 16:34:26.204: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:42823 --kubeconfig=/root/.kube/config --namespace=snapshotting-3910 exec restored-pvc-tester-skxjk --namespace=snapshotting-3910 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  8 16:34:50.522: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:51.526: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:52.531: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:53.535: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:54.540: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:55.545: INFO: volumesnapshotcontents snapcontent-1c57c23d-61eb-40e0-a0e8-d2b6ca8bd5a1 has been found and is not deleted
Aug  8 16:34:56.545: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  8 16:34:56.551: INFO: Pod restored-pvc-tester-skxjk has the following logs: 
Aug  8 16:34:56.551: INFO: Deleting pod "restored-pvc-tester-skxjk" in namespace "snapshotting-3910"
Aug  8 16:34:56.560: INFO: Wait up to 5m0s for pod "restored-pvc-tester-skxjk" to be fully deleted
Aug  8 16:35:38.567: INFO: deleting claim "snapshotting-3910"/"pvc-vcc52"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":62,"failed":0}
Aug  8 16:35:45.631: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 16:33:27.954: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 16:33:27.981: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iok6tqk] to have phase Bound
Aug  8 16:33:28.004: INFO: PersistentVolumeClaim hostpath.csi.k8s.iok6tqk found but phase is Pending instead of Bound.
Aug  8 16:33:30.015: INFO: PersistentVolumeClaim hostpath.csi.k8s.iok6tqk found and phase=Bound (2.033913855s)
STEP: [init] starting a pod to use the claim
Aug  8 16:33:30.050: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-ms69l" in namespace "snapshotting-1620" to be "Succeeded or Failed"
Aug  8 16:33:30.059: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 9.602096ms
Aug  8 16:33:32.064: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013703326s
Aug  8 16:33:34.067: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01732076s
Aug  8 16:33:36.072: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022261717s
Aug  8 16:33:38.076: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02642772s
Aug  8 16:33:40.080: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030604546s
Aug  8 16:33:42.085: INFO: Pod "pvc-snapshottable-tester-ms69l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.035075839s
STEP: Saw pod success
Aug  8 16:33:42.085: INFO: Pod "pvc-snapshottable-tester-ms69l" satisfied condition "Succeeded or Failed"
Aug  8 16:33:42.098: INFO: Pod pvc-snapshottable-tester-ms69l has the following logs: 
Aug  8 16:33:42.098: INFO: Deleting pod "pvc-snapshottable-tester-ms69l" in namespace "snapshotting-1620"
Aug  8 16:33:42.109: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-ms69l" to be fully deleted
Aug  8 16:33:42.111: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iok6tqk] to have phase Bound
Aug  8 16:33:42.114: INFO: PersistentVolumeClaim hostpath.csi.k8s.iok6tqk found and phase=Bound (2.443677ms)
STEP: [init] checking the claim
... skipping 32 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 16:33:50.230: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-tprl9" in namespace "snapshotting-1620" to be "Succeeded or Failed"
Aug  8 16:33:50.233: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710818ms
Aug  8 16:33:52.238: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007221016s
Aug  8 16:33:54.243: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012318504s
Aug  8 16:33:56.246: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015525822s
Aug  8 16:33:58.250: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019487671s
Aug  8 16:34:00.254: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023410472s
Aug  8 16:34:02.257: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027170889s
Aug  8 16:34:04.262: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031251539s
Aug  8 16:34:06.266: INFO: Pod "pvc-snapshottable-data-tester-tprl9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.035382674s
STEP: Saw pod success
Aug  8 16:34:06.266: INFO: Pod "pvc-snapshottable-data-tester-tprl9" satisfied condition "Succeeded or Failed"
Aug  8 16:34:06.274: INFO: Pod pvc-snapshottable-data-tester-tprl9 has the following logs: 
Aug  8 16:34:06.274: INFO: Deleting pod "pvc-snapshottable-data-tester-tprl9" in namespace "snapshotting-1620"
Aug  8 16:34:06.284: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-tprl9" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 16:34:26.310: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:42823 --kubeconfig=/root/.kube/config --namespace=snapshotting-1620 exec restored-pvc-tester-mpgrc --namespace=snapshotting-1620 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  8 16:34:50.633: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:51.637: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:52.643: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:53.647: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:54.652: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:55.657: INFO: volumesnapshotcontents pre-provisioned-snapcontent-5157a495-15f2-4689-9858-7f46d8961373 has been found and is not deleted
Aug  8 16:34:56.658: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  8 16:34:56.665: INFO: Pod restored-pvc-tester-mpgrc has the following logs: 
Aug  8 16:34:56.665: INFO: Deleting pod "restored-pvc-tester-mpgrc" in namespace "snapshotting-1620"
Aug  8 16:34:56.670: INFO: Wait up to 5m0s for pod "restored-pvc-tester-mpgrc" to be fully deleted
Aug  8 16:35:38.678: INFO: deleting claim "snapshotting-1620"/"pvc-kpx8g"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":238,"failed":0}
Aug  8 16:35:45.745: INFO: Running AfterSuite actions on all nodes


Aug  8 16:33:27.792: INFO: Running AfterSuite actions on all nodes
Aug  8 16:35:45.792: INFO: Running AfterSuite actions on node 1
Aug  8 16:35:45.792: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 16:35:45.792: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory


Ran 6 of 5976 Specs in 143.491 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 5970 Skipped


Ginkgo ran 1 suite in 2m44.902866742s
Test Suite Passed
Mon Aug  8 16:35:45 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel-features.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 5 lines ...
	to update it:
	go mod tidy
Mon Aug  8 16:35:46 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.3zhlDRENWi/e2e-repo-list ginkgo -v -focus=External.Storage.*(\[Serial\]|\[Disruptive\]) -skip=\[Feature:|Disruptive /home/prow/go/pkg/csiprow.3zhlDRENWi/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.3zhlDRENWi/test-driver.yaml
Aug  8 16:35:48.368: INFO: Driver loaded from path [/home/prow/go/pkg/csiprow.3zhlDRENWi/test-driver.yaml]: &{DriverInfo:{Name:hostpath.csi.k8s.io InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max: Min:1Mi} SupportedFsType:map[:{}] SupportedMountOption:map[] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true multipods:true nodeExpansion:true persistence:true singleNodeVolume:true snapshotDataSource:true topology:true] RequiredAccessModes:[] TopologyKeys:[] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:true FromFile: FromExistingClassName:} SnapshotClass:{FromName:true FromFile: FromExistingClassName:} InlineVolumes:[{Attributes:map[] Shared:false ReadOnly:false}] ClientNodeName:csi-prow-worker2 Timeouts:map[]}
Aug  8 16:35:48.429: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0808 16:35:48.430034  102568 e2e.go:129] Starting e2e run "94d11393-e82c-4777-92ab-7aa5191314b3" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659976546 - Will randomize all specs
Will run 4 of 5976 specs

Aug  8 16:35:48.505: INFO: >>> kubeConfig: /root/.kube/config
... skipping 113 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:126
------------------------------
SSSSSSSSSSSSSSSSSAug  8 16:35:48.668: INFO: Running AfterSuite actions on all nodes
Aug  8 16:35:48.668: INFO: Running AfterSuite actions on node 1
Aug  8 16:35:48.668: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 16:35:48.668: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":0,"skipped":5976,"failed":0}

Ran 0 of 5976 Specs in 0.166 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 5976 Skipped
PASS

Ginkgo ran 1 suite in 2.150277822s
Test Suite Passed
Mon Aug  8 16:35:48 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_serial.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 22 lines ...