Result: FAILURE
Tests: 0 failed / 67 succeeded
Started: 2022-08-08 10:18
Elapsed: 17m39s
Revision: master

No Test Failures!

Error lines from build-log.txt

... skipping 49 lines ...
non alpha feature gates for latest Kubernetes: CSI_PROW_E2E_GATES_LATEST=
non alpha E2E feature gates: CSI_PROW_E2E_GATES=
external-snapshotter version tag: CSI_SNAPSHOTTER_VERSION=master
tests that need to be skipped: CSI_PROW_E2E_SKIP=Disruptive
work directory: CSI_PROW_WORK=/home/prow/go/pkg/csiprow.Y65keq55jN
artifacts: ARTIFACTS=/logs/artifacts
Mon Aug  8 10:18:15 UTC 2022 go1.19 $ curl --fail --location -o /home/prow/go/pkg/csiprow.Y65keq55jN/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

100 6660k  100 6660k    0     0  23.5M      0 --:--:-- --:--:-- --:--:-- 23.5M
No kind clusters found.
INFO: kind-config.yaml:
... skipping 169 lines ...
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 306d58d Merge pull request #383 from pohly/changelog-5.0.0
Mon Aug  8 10:20:13 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test$ git clean -fdx
Mon Aug  8 10:20:13 UTC 2022 go1.19 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ curl --fail --location https://dl.google.com/go/go1.18.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 11  135M   11 15.7M    0     0  32.1M      0  0:00:04 --:--:--  0:00:04 32.0M
 36  135M   36 48.6M    0     0  31.8M      0  0:00:04  0:00:01  0:00:03 31.8M
 68  135M   68 92.4M    0     0  36.9M      0  0:00:03  0:00:02  0:00:01 36.9M
 97  135M   97  131M    0     0  37.6M      0  0:00:03  0:00:03 --:--:-- 37.6M
100  135M  100  135M    0     0  36.4M      0  0:00:03  0:00:03 --:--:-- 36.4M
Mon Aug  8 10:20:16 UTC 2022 go1.18 /home/prow/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity$ go build -o /home/prow/go/pkg/csiprow.Y65keq55jN/csi-sanity
Mon Aug  8 10:20:27 UTC 2022 go1.19 $ /home/prow/go/pkg/csiprow.Y65keq55jN/csi-sanity -ginkgo.v -csi.junitfile /logs/artifacts/junit_sanity.xml -csi.endpoint dns:///172.18.0.3:31752 -csi.stagingdir /tmp/staging -csi.mountdir /tmp/mount -csi.createstagingpathcmd /home/prow/go/pkg/csiprow.Y65keq55jN/mkdir_in_pod.sh -csi.createmountpathcmd /home/prow/go/pkg/csiprow.Y65keq55jN/mkdir_in_pod.sh -csi.removestagingpathcmd /home/prow/go/pkg/csiprow.Y65keq55jN/rmdir_in_pod.sh -csi.removemountpathcmd /home/prow/go/pkg/csiprow.Y65keq55jN/rmdir_in_pod.sh -csi.checkpathcmd /home/prow/go/pkg/csiprow.Y65keq55jN/checkdir_in_pod.sh
Running Suite: CSI Driver Test Suite - /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path
... skipping 154 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:32.127
    STEP: creating mount and staging directories 08/08/22 10:20:32.127
    STEP: creating required new volumes 08/08/22 10:20:32.459
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:32.959
STEP: creating mount and staging directories 08/08/22 10:20:32.959
------------------------------
• [0.649 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1422

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:32.959
    STEP: creating mount and staging directories 08/08/22 10:20:32.959
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:33.609
STEP: creating mount and staging directories 08/08/22 10:20:33.609
------------------------------
• [0.670 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no source volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:33.609
    STEP: creating mount and staging directories 08/08/22 10:20:33.609
  << End Captured GinkgoWriter Output
... skipping 21 lines ...
    STEP: creating a volume 08/08/22 10:20:34.632
    STEP: creating a snapshot 08/08/22 10:20:34.633
    STEP: creating a snapshot with the same name and source volume ID 08/08/22 10:20:34.637
  << End Captured GinkgoWriter Output
------------------------------
CreateSnapshot [Controller Server]
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:35.047
STEP: creating mount and staging directories 08/08/22 10:20:35.047
STEP: creating a snapshot 08/08/22 10:20:35.38
STEP: creating a new source volume 08/08/22 10:20:35.385
STEP: creating a snapshot with the same name but different source volume ID 08/08/22 10:20:35.387
I0808 10:20:35.392137   11963 resources.go:320] deleting snapshot ID bdbac7c8-1703-11ed-90e9-26b11af8785d
------------------------------
• [0.740 seconds]
CreateSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when requesting to create a snapshot with already existing name and different source volume ID
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1470

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:35.047
    STEP: creating mount and staging directories 08/08/22 10:20:35.047
    STEP: creating a snapshot 08/08/22 10:20:35.38
... skipping 85 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:37.957
    STEP: creating mount and staging directories 08/08/22 10:20:37.957
    STEP: verifying name size and characters 08/08/22 10:20:38.326
  << End Captured GinkgoWriter Output
------------------------------
ExpandVolume [Controller Server]
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:38.662
STEP: creating mount and staging directories 08/08/22 10:20:38.662
------------------------------
• [0.762 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no volume id is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1528

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:38.662
    STEP: creating mount and staging directories 08/08/22 10:20:38.662
  << End Captured GinkgoWriter Output
------------------------------
ExpandVolume [Controller Server]
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:39.424
STEP: creating mount and staging directories 08/08/22 10:20:39.424
------------------------------
• [0.765 seconds]
ExpandVolume [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail if no capacity range is given
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:39.424
    STEP: creating mount and staging directories 08/08/22 10:20:39.424
  << End Captured GinkgoWriter Output
... skipping 17 lines ...
    STEP: creating mount and staging directories 08/08/22 10:20:40.189
    STEP: creating a new volume 08/08/22 10:20:40.542
    STEP: expanding the volume 08/08/22 10:20:40.543
  << End Captured GinkgoWriter Output
------------------------------
DeleteSnapshot [Controller Server]
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:40.897
STEP: creating mount and staging directories 08/08/22 10:20:40.897
------------------------------
• [0.688 seconds]
DeleteSnapshot [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  should fail when no snapshot id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1366

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:40.897
    STEP: creating mount and staging directories 08/08/22 10:20:40.897
  << End Captured GinkgoWriter Output
... skipping 94 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:44.352
    STEP: creating mount and staging directories 08/08/22 10:20:44.352
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ListVolumes
  should fail when an invalid starting_token is passed
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:45.044
STEP: creating mount and staging directories 08/08/22 10:20:45.045
------------------------------
• [0.696 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ListVolumes
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:175
    should fail when an invalid starting_token is passed
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:194

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:45.044
    STEP: creating mount and staging directories 08/08/22 10:20:45.045
  << End Captured GinkgoWriter Output
... skipping 23 lines ...
------------------------------
P [PENDING]
Controller Service [Controller Server] ListVolumes pagination should detect volumes added between pages and accept tokens when the last volume from a page is deleted
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:268
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no name is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:46.468
STEP: creating mount and staging directories 08/08/22 10:20:46.468
------------------------------
• [0.695 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no name is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:376

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:46.468
    STEP: creating mount and staging directories 08/08/22 10:20:46.468
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:47.163
STEP: creating mount and staging directories 08/08/22 10:20:47.163
------------------------------
• [0.705 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:391

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:47.163
    STEP: creating mount and staging directories 08/08/22 10:20:47.163
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:48.944
    STEP: creating mount and staging directories 08/08/22 10:20:48.945
    STEP: creating a volume 08/08/22 10:20:49.352
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when requesting to create a volume with already existing name and same capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:49.787
STEP: creating mount and staging directories 08/08/22 10:20:49.787
STEP: creating a volume 08/08/22 10:20:50.156
------------------------------
• [0.731 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when requesting to create a volume with already existing name and same capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:460

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:49.787
    STEP: creating mount and staging directories 08/08/22 10:20:49.787
    STEP: creating a volume 08/08/22 10:20:50.156
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when requesting to create a volume with already existing name and different capacity
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:50.519
STEP: creating mount and staging directories 08/08/22 10:20:50.519
STEP: creating a volume 08/08/22 10:20:50.869
------------------------------
• [0.692 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when requesting to create a volume with already existing name and different capacity
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:501

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:50.519
    STEP: creating mount and staging directories 08/08/22 10:20:50.519
    STEP: creating a volume 08/08/22 10:20:50.869
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should not fail when creating volume with maximum-length name
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:51.211
STEP: creating mount and staging directories 08/08/22 10:20:51.211
STEP: creating a volume 08/08/22 10:20:51.553
------------------------------
• [0.737 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should not fail when creating volume with maximum-length name
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:545

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:51.211
    STEP: creating mount and staging directories 08/08/22 10:20:51.211
    STEP: creating a volume 08/08/22 10:20:51.553
... skipping 21 lines ...
    STEP: creating mount and staging directories 08/08/22 10:20:51.949
    STEP: creating a snapshot 08/08/22 10:20:52.312
    STEP: creating a volume from source snapshot 08/08/22 10:20:52.317
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source snapshot is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:52.699
STEP: creating mount and staging directories 08/08/22 10:20:52.699
STEP: creating a volume from source snapshot 08/08/22 10:20:53.041
------------------------------
• [0.702 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source snapshot is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:595

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:52.699
    STEP: creating mount and staging directories 08/08/22 10:20:52.699
    STEP: creating a volume from source snapshot 08/08/22 10:20:53.041
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/08/22 10:20:53.401
    STEP: creating a volume 08/08/22 10:20:53.759
    STEP: creating a volume from source volume 08/08/22 10:20:53.761
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] CreateVolume
  should fail when the volume source volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:54.16
STEP: creating mount and staging directories 08/08/22 10:20:54.16
STEP: creating a volume from source snapshot 08/08/22 10:20:54.522
------------------------------
• [0.723 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  CreateVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:369
    should fail when the volume source volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:641

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:54.16
    STEP: creating mount and staging directories 08/08/22 10:20:54.16
    STEP: creating a volume from source snapshot 08/08/22 10:20:54.522
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] DeleteVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:54.882
STEP: creating mount and staging directories 08/08/22 10:20:54.883
------------------------------
• [0.702 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  DeleteVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:664
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:54.882
    STEP: creating mount and staging directories 08/08/22 10:20:54.883
  << End Captured GinkgoWriter Output
... skipping 38 lines ...
    STEP: creating mount and staging directories 08/08/22 10:20:56.263
    STEP: creating a volume 08/08/22 10:20:56.606
    STEP: deleting a volume 08/08/22 10:20:56.608
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:56.96
STEP: creating mount and staging directories 08/08/22 10:20:56.96
------------------------------
• [0.679 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:734

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:56.96
    STEP: creating mount and staging directories 08/08/22 10:20:56.96
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when no volume capabilities are provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:57.639
STEP: creating mount and staging directories 08/08/22 10:20:57.639
STEP: creating a single node writer volume 08/08/22 10:20:57.987
------------------------------
• [0.753 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when no volume capabilities are provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:748

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:57.639
    STEP: creating mount and staging directories 08/08/22 10:20:57.639
    STEP: creating a single node writer volume 08/08/22 10:20:57.987
... skipping 20 lines ...
    STEP: creating mount and staging directories 08/08/22 10:20:58.392
    STEP: creating a single node writer volume 08/08/22 10:20:58.724
    STEP: validating volume capabilities 08/08/22 10:20:58.726
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ValidateVolumeCapabilities
  should fail when the requested volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:59.117
STEP: creating mount and staging directories 08/08/22 10:20:59.117
------------------------------
• [0.755 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ValidateVolumeCapabilities
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:733
    should fail when the requested volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:825

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:59.117
    STEP: creating mount and staging directories 08/08/22 10:20:59.117
  << End Captured GinkgoWriter Output
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:59.872
STEP: creating mount and staging directories 08/08/22 10:20:59.872
------------------------------
S [SKIPPED] [0.723 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:852

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:20:59.872
    STEP: creating mount and staging directories 08/08/22 10:20:59.872
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no node id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:00.595
STEP: creating mount and staging directories 08/08/22 10:21:00.595
------------------------------
S [SKIPPED] [0.716 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no node id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:867

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:00.595
    STEP: creating mount and staging directories 08/08/22 10:21:00.595
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:01.311
STEP: creating mount and staging directories 08/08/22 10:21:01.311
------------------------------
S [SKIPPED] [0.681 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:883

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:01.311
    STEP: creating mount and staging directories 08/08/22 10:21:01.311
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when publishing more volumes than the node max attach limit
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:01.993
STEP: creating mount and staging directories 08/08/22 10:21:01.993
------------------------------
S [SKIPPED] [0.705 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when publishing more volumes than the node max attach limit
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:900

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:01.993
    STEP: creating mount and staging directories 08/08/22 10:21:01.993
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:02.698
STEP: creating mount and staging directories 08/08/22 10:21:02.699
------------------------------
S [SKIPPED] [0.671 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:940

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:02.698
    STEP: creating mount and staging directories 08/08/22 10:21:02.699
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the node does not exist
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:03.37
STEP: creating mount and staging directories 08/08/22 10:21:03.37
------------------------------
S [SKIPPED] [0.677 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the node does not exist
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:962

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:03.37
    STEP: creating mount and staging directories 08/08/22 10:21:03.37
  << End Captured GinkgoWriter Output

  ControllerPublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:848
------------------------------
Controller Service [Controller Server] ControllerPublishVolume
  should fail when the volume is already published but is incompatible
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:04.047
STEP: creating mount and staging directories 08/08/22 10:21:04.047
------------------------------
S [SKIPPED] [0.715 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerPublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846
    should fail when the volume is already published but is incompatible
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1001

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:04.047
    STEP: creating mount and staging directories 08/08/22 10:21:04.047
  << End Captured GinkgoWriter Output
... skipping 43 lines ...
  << End Captured GinkgoWriter Output

  Controller Publish, UnpublishVolume not supported
  In [BeforeEach] at: /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1059
------------------------------
Controller Service [Controller Server] ControllerUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:06.159
STEP: creating mount and staging directories 08/08/22 10:21:06.159
------------------------------
S [SKIPPED] [0.692 seconds]
Controller Service [Controller Server]
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  ControllerUnpublishVolume [BeforeEach]
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1073
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1079

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:06.159
    STEP: creating mount and staging directories 08/08/22 10:21:06.159
  << End Captured GinkgoWriter Output
... skipping 39 lines ...
  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:07.561
    STEP: creating mount and staging directories 08/08/22 10:21:07.561
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:08.284
STEP: creating mount and staging directories 08/08/22 10:21:08.284
------------------------------
• [0.671 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:379

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:08.284
    STEP: creating mount and staging directories 08/08/22 10:21:08.284
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:08.955
STEP: creating mount and staging directories 08/08/22 10:21:08.955
------------------------------
• [0.785 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:393

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:08.955
    STEP: creating mount and staging directories 08/08/22 10:21:08.955
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodePublishVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:09.74
STEP: creating mount and staging directories 08/08/22 10:21:09.74
------------------------------
• [0.815 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodePublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:378
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:408

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:09.74
    STEP: creating mount and staging directories 08/08/22 10:21:09.74
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:10.556
STEP: creating mount and staging directories 08/08/22 10:21:10.556
------------------------------
• [0.739 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:427

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:10.556
    STEP: creating mount and staging directories 08/08/22 10:21:10.556
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnpublishVolume
  should fail when no target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:11.295
STEP: creating mount and staging directories 08/08/22 10:21:11.295
------------------------------
• [0.729 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnpublishVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:426
    should fail when no target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:439

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:11.295
    STEP: creating mount and staging directories 08/08/22 10:21:11.295
  << End Captured GinkgoWriter Output
... skipping 31 lines ...
    STEP: Checking the target path exists 08/08/22 10:21:12.378
    STEP: Unpublishing the volume 08/08/22 10:21:12.561
    STEP: Checking the target path was removed 08/08/22 10:21:12.564
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:13.076
STEP: creating mount and staging directories 08/08/22 10:21:13.076
------------------------------
• [0.685 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:525

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:13.076
    STEP: creating mount and staging directories 08/08/22 10:21:13.076
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:13.761
STEP: creating mount and staging directories 08/08/22 10:21:13.761
------------------------------
• [0.718 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:544

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:13.761
    STEP: creating mount and staging directories 08/08/22 10:21:13.761
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeStageVolume
  should fail when no volume capability is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:14.479
STEP: creating mount and staging directories 08/08/22 10:21:14.48
STEP: creating a single node writer volume 08/08/22 10:21:14.834
------------------------------
• [0.692 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeStageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:512
    should fail when no volume capability is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:563

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:14.479
    STEP: creating mount and staging directories 08/08/22 10:21:14.48
    STEP: creating a single node writer volume 08/08/22 10:21:14.834
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:15.172
STEP: creating mount and staging directories 08/08/22 10:21:15.172
------------------------------
• [0.694 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:614

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:15.172
    STEP: creating mount and staging directories 08/08/22 10:21:15.172
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeUnstageVolume
  should fail when no staging target path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:15.866
STEP: creating mount and staging directories 08/08/22 10:21:15.867
------------------------------
• [0.693 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeUnstageVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:607
    should fail when no staging target path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:628

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:15.866
    STEP: creating mount and staging directories 08/08/22 10:21:15.867
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:16.559
STEP: creating mount and staging directories 08/08/22 10:21:16.559
------------------------------
• [0.669 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:650

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:16.559
    STEP: creating mount and staging directories 08/08/22 10:21:16.559
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:17.228
STEP: creating mount and staging directories 08/08/22 10:21:17.228
------------------------------
• [0.690 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:664

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:17.228
    STEP: creating mount and staging directories 08/08/22 10:21:17.228
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:17.918
STEP: creating mount and staging directories 08/08/22 10:21:17.918
------------------------------
• [1.040 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:678

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:17.918
    STEP: creating mount and staging directories 08/08/22 10:21:17.918
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeGetVolumeStats
  should fail when volume does not exist on the specified path
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:18.959
STEP: creating mount and staging directories 08/08/22 10:21:18.959
STEP: creating a single node writer volume for expansion 08/08/22 10:21:19.481
STEP: getting a node id 08/08/22 10:21:19.485
STEP: node staging volume 08/08/22 10:21:19.529
... skipping 2 lines ...
------------------------------
• [1.181 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeGetVolumeStats
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:643
    should fail when volume does not exist on the specified path
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:693

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:18.959
    STEP: creating mount and staging directories 08/08/22 10:21:18.959
    STEP: creating a single node writer volume for expansion 08/08/22 10:21:19.481
    STEP: getting a node id 08/08/22 10:21:19.485
    STEP: node staging volume 08/08/22 10:21:19.529
    STEP: publishing the volume on a node 08/08/22 10:21:19.531
    STEP: Get node volume stats 08/08/22 10:21:19.541
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume id is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:20.14
STEP: creating mount and staging directories 08/08/22 10:21:20.141
------------------------------
• [0.783 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume id is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:740

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:20.14
    STEP: creating mount and staging directories 08/08/22 10:21:20.141
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when no volume path is provided
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:20.923
STEP: creating mount and staging directories 08/08/22 10:21:20.924
STEP: creating a single node writer volume for expansion 08/08/22 10:21:21.273
------------------------------
• [0.733 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when no volume path is provided
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:755

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:20.923
    STEP: creating mount and staging directories 08/08/22 10:21:20.924
    STEP: creating a single node writer volume for expansion 08/08/22 10:21:21.273
  << End Captured GinkgoWriter Output
------------------------------
Node Service NodeExpandVolume
  should fail when volume is not found
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774
STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:21.657
STEP: creating mount and staging directories 08/08/22 10:21:21.657
------------------------------
• [0.702 seconds]
Node Service
/home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:45
  NodeExpandVolume
  /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:732
    should fail when volume is not found
    /home/prow/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:774

  Begin Captured GinkgoWriter Output >>
    STEP: reusing connection to CSI driver at dns:///172.18.0.3:31752 08/08/22 10:21:21.657
    STEP: creating mount and staging directories 08/08/22 10:21:21.657
  << End Captured GinkgoWriter Output
... skipping 129 lines ...
[ReportAfterSuite] PASSED [0.003 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Ran 67 of 78 Specs in 57.190 seconds
SUCCESS! -- 67 Passed | 0 Failed | 1 Pending | 10 Skipped
Mon Aug  8 10:21:24 UTC 2022 go1.19 $ git init /home/prow/go/src/k8s.io/kubernetes
Initialized empty Git repository in /home/prow/go/src/k8s.io/kubernetes/.git/
Mon Aug  8 10:21:24 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git fetch --depth=1 https://github.com/kubernetes/kubernetes v1.21.0
From https://github.com/kubernetes/kubernetes
 * tag                 v1.21.0    -> FETCH_HEAD
Mon Aug  8 10:21:36 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git checkout FETCH_HEAD
... skipping 11 lines ...
HEAD is now at cb303e61 Release commit for Kubernetes v1.21.0
Mon Aug  8 10:21:38 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ git clean -fdx

Using a modified version of k/k/test/e2e:


Mon Aug  8 10:21:39 UTC 2022 go1.19 $ curl --fail --location https://dl.google.com/go/go1.16.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 16  123M   16 19.7M    0     0  28.4M      0  0:00:04 --:--:--  0:00:04 28.4M
 41  123M   41 51.0M    0     0  30.1M      0  0:00:04  0:00:01  0:00:03 30.1M
 80  123M   80 99.6M    0     0  37.3M      0  0:00:03  0:00:02  0:00:01 37.2M
100  123M  100  123M    0     0  35.1M      0  0:00:03  0:00:03 --:--:-- 35.1M
Mon Aug  8 10:21:43 UTC 2022 go1.16 $ make WHAT=test/e2e/e2e.test -C/home/prow/go/src/k8s.io/kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
make[1]: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
... skipping 293 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 282 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 272 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 155 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 102 lines ...
STEP: Creating a kubernetes client
Aug  8 10:27:50.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0808 10:27:51.213359   64788 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 10:27:51.213: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  8 10:27:51.306: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  8 10:27:51.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-3736" for this suite.


S [SKIPPING] [0.912 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 29 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 153 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 378 lines ...
STEP: Creating a kubernetes client
Aug  8 10:27:51.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0808 10:27:52.710562   64870 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 10:27:52.710: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  8 10:27:52.713: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  8 10:27:52.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-5008" for this suite.


S [SKIPPING] [1.434 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
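Editor's note: the two topology skips above ("Driver didn't provide topology keys") are expected for this deployment; the external suite only runs the AllowedTopologies specs when the driver's node plugin reports accessible topology. Below is a minimal, hypothetical sketch (not the hostpath driver's actual code) of how a CSI node plugin advertises such keys through NodeGetInfo, using the CSI spec Go bindings; the struct, the "example.com/zone" key, and the node ID are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical, partial CSI node plugin; only NodeGetInfo is shown.
type nodeServer struct {
	nodeID string
}

// NodeGetInfo is where a driver opts into topology-aware provisioning: any
// segments returned here become the "topology keys" the skipped specs look for.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{"example.com/zone": "zone-a"}, // illustrative key/value
		},
	}, nil
}

func main() {
	resp, _ := (&nodeServer{nodeID: "csi-prow-worker"}).NodeGetInfo(context.Background(), &csi.NodeGetInfoRequest{})
	fmt.Println(resp.GetAccessibleTopology().GetSegments())
}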
... skipping 418 lines ...
Aug  8 10:27:54.974: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iown8lt] to have phase Bound
Aug  8 10:27:54.977: INFO: PersistentVolumeClaim hostpath.csi.k8s.iown8lt found but phase is Pending instead of Bound.
Aug  8 10:27:56.982: INFO: PersistentVolumeClaim hostpath.csi.k8s.iown8lt found but phase is Pending instead of Bound.
Aug  8 10:27:58.989: INFO: PersistentVolumeClaim hostpath.csi.k8s.iown8lt found and phase=Bound (4.014875259s)
STEP: Expanding non-expandable pvc
Aug  8 10:27:58.995: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  8 10:27:59.001: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:01.010: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:03.009: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:05.013: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:07.010: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:09.009: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:11.076: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:13.009: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:15.009: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:17.010: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:19.011: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:21.012: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:23.011: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:25.011: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:27.010: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:29.012: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:29.020: INFO: Error updating pvc hostpath.csi.k8s.iown8lt: persistentvolumeclaims "hostpath.csi.k8s.iown8lt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  8 10:28:29.020: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iown8lt"
Aug  8 10:28:29.025: INFO: Waiting up to 5m0s for PersistentVolume pvc-71a00003-1caf-4947-8c70-ddcadf5c2de3 to get deleted
Aug  8 10:28:29.029: INFO: PersistentVolume pvc-71a00003-1caf-4947-8c70-ddcadf5c2de3 found and phase=Bound (2.914331ms)
Aug  8 10:28:34.032: INFO: PersistentVolume pvc-71a00003-1caf-4947-8c70-ddcadf5c2de3 was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":167,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volume-expand 
  should not allow expansion of pvcs without AllowVolumeExpansion property
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
... skipping 16 lines ...
Aug  8 10:27:57.526: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iodqllf] to have phase Bound
Aug  8 10:27:57.528: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodqllf found but phase is Pending instead of Bound.
Aug  8 10:27:59.532: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodqllf found but phase is Pending instead of Bound.
Aug  8 10:28:01.537: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodqllf found and phase=Bound (4.011687486s)
STEP: Expanding non-expandable pvc
Aug  8 10:28:01.542: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  8 10:28:01.548: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:03.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:05.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:07.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:09.561: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:11.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:13.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:15.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:17.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:19.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:21.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:23.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:25.557: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:27.562: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:29.558: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:31.556: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  8 10:28:31.563: INFO: Error updating pvc hostpath.csi.k8s.iodqllf: persistentvolumeclaims "hostpath.csi.k8s.iodqllf" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  8 10:28:31.563: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iodqllf"
Aug  8 10:28:31.569: INFO: Waiting up to 5m0s for PersistentVolume pvc-a84c37fc-9e4a-4d3f-ac52-df1932b2d59f to get deleted
Aug  8 10:28:31.575: INFO: PersistentVolume pvc-a84c37fc-9e4a-4d3f-ac52-df1932b2d59f found and phase=Bound (5.86807ms)
Aug  8 10:28:36.579: INFO: PersistentVolume pvc-a84c37fc-9e4a-4d3f-ac52-df1932b2d59f was removed
STEP: Deleting sc
... skipping 8 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not allow expansion of pvcs without AllowVolumeExpansion property
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":207,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:28:36.605: INFO: Driver hostpath.csi.k8s.io doesn't support ext3 -- skipping
... skipping 76 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:27:50.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
W0808 10:27:51.313163   64793 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 10:27:51.313: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  8 10:27:51.365: INFO: Creating resource for dynamic PV
Aug  8 10:27:51.365: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-5926-e2e-sclgn67
STEP: creating a claim
Aug  8 10:27:51.657: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioln27c] to have phase Bound
Aug  8 10:27:51.704: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioln27c found but phase is Pending instead of Bound.
Aug  8 10:27:53.707: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioln27c found and phase=Bound (2.050182313s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  8 10:27:59.730: INFO: Deleting pod "pod-212c0b0c-c095-4f8a-83f3-d91c660d3acb" in namespace "volumemode-5926"
Aug  8 10:27:59.735: INFO: Wait up to 5m0s for pod "pod-212c0b0c-c095-4f8a-83f3-d91c660d3acb" to be fully deleted
STEP: Deleting pvc
Aug  8 10:28:49.743: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.ioln27c"
Aug  8 10:28:49.749: INFO: Waiting up to 5m0s for PersistentVolume pvc-a10ee685-0d04-4851-8a13-fdb32c6c1e9b to get deleted
Aug  8 10:28:49.755: INFO: PersistentVolume pvc-a10ee685-0d04-4851-8a13-fdb32c6c1e9b found and phase=Bound (6.520508ms)
... skipping 7 lines ...

• [SLOW TEST:64.115 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":1,"skipped":37,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes 
  should allow exec of files on the volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
... skipping 17 lines ...
Aug  8 10:27:50.948: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:51.095: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iork2pr] to have phase Bound
Aug  8 10:27:51.153: INFO: PersistentVolumeClaim hostpath.csi.k8s.iork2pr found but phase is Pending instead of Bound.
Aug  8 10:27:53.157: INFO: PersistentVolumeClaim hostpath.csi.k8s.iork2pr found and phase=Bound (2.061866725s)
STEP: Creating pod exec-volume-test-dynamicpv-lmpf
STEP: Creating a pod to test exec-volume-test
Aug  8 10:27:53.168: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-lmpf" in namespace "volume-7077" to be "Succeeded or Failed"
Aug  8 10:27:53.174: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.618026ms
Aug  8 10:27:55.177: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008890775s
Aug  8 10:27:57.182: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013546434s
Aug  8 10:27:59.189: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020988682s
Aug  8 10:28:01.193: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02510495s
Aug  8 10:28:03.197: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028807455s
... skipping 26 lines ...
Aug  8 10:28:57.319: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.150359901s
Aug  8 10:28:59.322: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.153571347s
Aug  8 10:29:01.327: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.15838823s
Aug  8 10:29:03.330: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.161613166s
Aug  8 10:29:05.333: INFO: Pod "exec-volume-test-dynamicpv-lmpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.16471991s
STEP: Saw pod success
Aug  8 10:29:05.333: INFO: Pod "exec-volume-test-dynamicpv-lmpf" satisfied condition "Succeeded or Failed"
Aug  8 10:29:05.336: INFO: Trying to get logs from node csi-prow-worker pod exec-volume-test-dynamicpv-lmpf container exec-container-dynamicpv-lmpf: <nil>
STEP: delete the pod
Aug  8 10:29:05.351: INFO: Waiting for pod exec-volume-test-dynamicpv-lmpf to disappear
Aug  8 10:29:05.353: INFO: Pod exec-volume-test-dynamicpv-lmpf no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-lmpf
Aug  8 10:29:05.354: INFO: Deleting pod "exec-volume-test-dynamicpv-lmpf" in namespace "volume-7077"
... skipping 14 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":22,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:29:10.475: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 20 lines ...
Aug  8 10:27:53.764: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:53.769: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.ioxx7n2] to have phase Bound
Aug  8 10:27:53.773: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioxx7n2 found but phase is Pending instead of Bound.
Aug  8 10:27:55.776: INFO: PersistentVolumeClaim hostpath.csi.k8s.ioxx7n2 found and phase=Bound (2.006758842s)
STEP: Creating pod pod-subpath-test-dynamicpv-rpt5
STEP: Creating a pod to test subpath
Aug  8 10:27:55.794: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rpt5" in namespace "provisioning-4417" to be "Succeeded or Failed"
Aug  8 10:27:55.796: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612796ms
Aug  8 10:27:57.800: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005920084s
Aug  8 10:27:59.806: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012018323s
Aug  8 10:28:01.810: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016018479s
Aug  8 10:28:03.813: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019315763s
Aug  8 10:28:05.816: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022688423s
... skipping 39 lines ...
Aug  8 10:29:25.999: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.205760358s
Aug  8 10:29:28.006: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.212315706s
Aug  8 10:29:30.013: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.219639461s
Aug  8 10:29:32.019: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.225333182s
Aug  8 10:29:34.024: INFO: Pod "pod-subpath-test-dynamicpv-rpt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m38.230160497s
STEP: Saw pod success
Aug  8 10:29:34.024: INFO: Pod "pod-subpath-test-dynamicpv-rpt5" satisfied condition "Succeeded or Failed"
Aug  8 10:29:34.027: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-rpt5 container test-container-subpath-dynamicpv-rpt5: <nil>
STEP: delete the pod
Aug  8 10:29:34.043: INFO: Waiting for pod pod-subpath-test-dynamicpv-rpt5 to disappear
Aug  8 10:29:34.045: INFO: Pod pod-subpath-test-dynamicpv-rpt5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rpt5
Aug  8 10:29:34.046: INFO: Deleting pod "pod-subpath-test-dynamicpv-rpt5" in namespace "provisioning-4417"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing single file [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":69,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:29:39.115: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 89 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":337,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:29:42.241: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "PreprovisionedPV" - skipping
... skipping 13 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:27:50.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0808 10:27:51.312387   64911 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 10:27:51.312: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Aug  8 10:27:51.320: INFO: Creating resource for dynamic PV
Aug  8 10:27:51.321: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5641-e2e-sct279q
STEP: creating a claim
Aug  8 10:27:51.366: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:51.589: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iojxpfn] to have phase Bound
Aug  8 10:27:51.669: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojxpfn found but phase is Pending instead of Bound.
Aug  8 10:27:53.672: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojxpfn found and phase=Bound (2.083221068s)
STEP: Creating pod pod-subpath-test-dynamicpv-rdpl
STEP: Checking for subpath error in container status
Aug  8 10:28:41.696: INFO: Deleting pod "pod-subpath-test-dynamicpv-rdpl" in namespace "provisioning-5641"
Aug  8 10:28:41.701: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-rdpl" to be fully deleted
STEP: Deleting pod
Aug  8 10:29:37.731: INFO: Deleting pod "pod-subpath-test-dynamicpv-rdpl" in namespace "provisioning-5641"
STEP: Deleting pvc
Aug  8 10:29:37.739: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iojxpfn"
... skipping 9 lines ...

• [SLOW TEST:112.062 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support readOnly directory specified in the volumeMount
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
... skipping 17 lines ...
Aug  8 10:27:52.818: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:52.825: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io65zcw] to have phase Bound
Aug  8 10:27:52.828: INFO: PersistentVolumeClaim hostpath.csi.k8s.io65zcw found but phase is Pending instead of Bound.
Aug  8 10:27:54.832: INFO: PersistentVolumeClaim hostpath.csi.k8s.io65zcw found and phase=Bound (2.006556024s)
STEP: Creating pod pod-subpath-test-dynamicpv-mlcp
STEP: Creating a pod to test subpath
Aug  8 10:27:54.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mlcp" in namespace "provisioning-611" to be "Succeeded or Failed"
Aug  8 10:27:54.847: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.22453ms
Aug  8 10:27:56.851: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007986363s
Aug  8 10:27:58.856: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012685742s
Aug  8 10:28:00.859: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015617523s
Aug  8 10:28:02.863: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019497684s
Aug  8 10:28:04.867: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023527069s
... skipping 42 lines ...
Aug  8 10:29:31.066: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.223023637s
Aug  8 10:29:33.071: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.228149435s
Aug  8 10:29:35.076: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.232645403s
Aug  8 10:29:37.081: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.237512971s
Aug  8 10:29:39.090: INFO: Pod "pod-subpath-test-dynamicpv-mlcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m44.246354512s
STEP: Saw pod success
Aug  8 10:29:39.090: INFO: Pod "pod-subpath-test-dynamicpv-mlcp" satisfied condition "Succeeded or Failed"
Aug  8 10:29:39.093: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-mlcp container test-container-subpath-dynamicpv-mlcp: <nil>
STEP: delete the pod
Aug  8 10:29:39.122: INFO: Waiting for pod pod-subpath-test-dynamicpv-mlcp to disappear
Aug  8 10:29:39.130: INFO: Pod pod-subpath-test-dynamicpv-mlcp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mlcp
Aug  8 10:29:39.130: INFO: Deleting pod "pod-subpath-test-dynamicpv-mlcp" in namespace "provisioning-611"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":62,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:29:44.261: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping
... skipping 44 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:240
------------------------------
SSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:27:52.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Aug  8 10:27:56.013: INFO: Creating resource for dynamic PV
Aug  8 10:27:56.013: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-3408-e2e-scv58hc
STEP: creating a claim
Aug  8 10:27:56.027: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iowqkdl] to have phase Bound
Aug  8 10:27:56.033: INFO: PersistentVolumeClaim hostpath.csi.k8s.iowqkdl found but phase is Pending instead of Bound.
Aug  8 10:27:58.036: INFO: PersistentVolumeClaim hostpath.csi.k8s.iowqkdl found but phase is Pending instead of Bound.
Aug  8 10:28:00.041: INFO: PersistentVolumeClaim hostpath.csi.k8s.iowqkdl found and phase=Bound (4.014321687s)
STEP: Creating pod
STEP: Waiting for the pod to fail
Aug  8 10:28:34.068: INFO: Deleting pod "pod-7bfc3ea2-24b1-46e2-aabc-fe1f47c4eec4" in namespace "volumemode-3408"
Aug  8 10:28:34.073: INFO: Wait up to 5m0s for pod "pod-7bfc3ea2-24b1-46e2-aabc-fe1f47c4eec4" to be fully deleted
STEP: Deleting pvc
Aug  8 10:30:02.086: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iowqkdl"
Aug  8 10:30:02.090: INFO: Waiting up to 5m0s for PersistentVolume pvc-66a7e717-6626-4848-94e6-2698bb4b0aef to get deleted
Aug  8 10:30:02.093: INFO: PersistentVolume pvc-66a7e717-6626-4848-94e6-2698bb4b0aef found and phase=Bound (2.991719ms)
... skipping 7 lines ...

• [SLOW TEST:134.677 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":-1,"completed":1,"skipped":144,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  8 10:30:07.212: INFO: Driver "hostpath.csi.k8s.io" does not support volume type "InlineVolume" - skipping
... skipping 41 lines ...
Aug  8 10:27:52.142: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:52.216: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io894dx] to have phase Bound
Aug  8 10:27:52.261: INFO: PersistentVolumeClaim hostpath.csi.k8s.io894dx found but phase is Pending instead of Bound.
Aug  8 10:27:54.265: INFO: PersistentVolumeClaim hostpath.csi.k8s.io894dx found and phase=Bound (2.048768043s)
STEP: Creating pod pod-subpath-test-dynamicpv-js9j
STEP: Creating a pod to test subpath
Aug  8 10:27:54.291: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-js9j" in namespace "provisioning-7316" to be "Succeeded or Failed"
Aug  8 10:27:54.300: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139614ms
Aug  8 10:27:56.310: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018996283s
Aug  8 10:27:58.315: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023745971s
Aug  8 10:28:00.320: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028792812s
Aug  8 10:28:02.325: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033293381s
Aug  8 10:28:04.329: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.037224636s
... skipping 56 lines ...
Aug  8 10:29:58.629: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.337640464s
Aug  8 10:30:00.633: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.341697084s
Aug  8 10:30:02.638: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.346165575s
Aug  8 10:30:04.643: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.351137922s
Aug  8 10:30:06.654: INFO: Pod "pod-subpath-test-dynamicpv-js9j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m12.362781772s
STEP: Saw pod success
Aug  8 10:30:06.654: INFO: Pod "pod-subpath-test-dynamicpv-js9j" satisfied condition "Succeeded or Failed"
Aug  8 10:30:06.659: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-js9j container test-container-subpath-dynamicpv-js9j: <nil>
STEP: delete the pod
Aug  8 10:30:06.677: INFO: Waiting for pod pod-subpath-test-dynamicpv-js9j to disappear
Aug  8 10:30:06.680: INFO: Pod pod-subpath-test-dynamicpv-js9j no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-js9j
Aug  8 10:30:06.680: INFO: Deleting pod "pod-subpath-test-dynamicpv-js9j" in namespace "provisioning-7316"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support readOnly file specified in the volumeMount [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":29,"failed":0}
Aug  8 10:30:11.719: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support creating multiple subpath from same volumes [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
... skipping 15 lines ...
Aug  8 10:28:37.048: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:28:37.055: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io785hq] to have phase Bound
Aug  8 10:28:37.058: INFO: PersistentVolumeClaim hostpath.csi.k8s.io785hq found but phase is Pending instead of Bound.
Aug  8 10:28:39.062: INFO: PersistentVolumeClaim hostpath.csi.k8s.io785hq found and phase=Bound (2.007549366s)
STEP: Creating pod pod-subpath-test-dynamicpv-clzk
STEP: Creating a pod to test multi_subpath
Aug  8 10:28:39.073: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-clzk" in namespace "provisioning-9131" to be "Succeeded or Failed"
Aug  8 10:28:39.077: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.466873ms
Aug  8 10:28:41.081: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007349836s
Aug  8 10:28:43.085: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011332291s
Aug  8 10:28:45.090: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016196026s
Aug  8 10:28:47.096: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021858239s
Aug  8 10:28:49.100: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026289504s
... skipping 35 lines ...
Aug  8 10:30:01.288: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.214423809s
Aug  8 10:30:03.293: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.219312006s
Aug  8 10:30:05.297: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.223301171s
Aug  8 10:30:07.302: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.228511383s
Aug  8 10:30:09.307: INFO: Pod "pod-subpath-test-dynamicpv-clzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m30.23304838s
STEP: Saw pod success
Aug  8 10:30:09.307: INFO: Pod "pod-subpath-test-dynamicpv-clzk" satisfied condition "Succeeded or Failed"
Aug  8 10:30:09.310: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-clzk container test-container-subpath-dynamicpv-clzk: <nil>
STEP: delete the pod
Aug  8 10:30:09.323: INFO: Waiting for pod pod-subpath-test-dynamicpv-clzk to disappear
Aug  8 10:30:09.325: INFO: Pod pod-subpath-test-dynamicpv-clzk no longer exists
STEP: Deleting pod
Aug  8 10:30:09.326: INFO: Deleting pod "pod-subpath-test-dynamicpv-clzk" in namespace "provisioning-9131"
... skipping 14 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support creating multiple subpath from same volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":-1,"completed":2,"skipped":487,"failed":0}
Aug  8 10:30:14.359: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support non-existent path
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
... skipping 15 lines ...
Aug  8 10:29:10.537: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:29:10.546: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io5qs6z] to have phase Bound
Aug  8 10:29:10.554: INFO: PersistentVolumeClaim hostpath.csi.k8s.io5qs6z found but phase is Pending instead of Bound.
Aug  8 10:29:12.558: INFO: PersistentVolumeClaim hostpath.csi.k8s.io5qs6z found and phase=Bound (2.011415966s)
STEP: Creating pod pod-subpath-test-dynamicpv-jsqw
STEP: Creating a pod to test subpath
Aug  8 10:29:12.568: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jsqw" in namespace "provisioning-1436" to be "Succeeded or Failed"
Aug  8 10:29:12.571: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388862ms
Aug  8 10:29:14.576: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006832529s
Aug  8 10:29:16.580: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011359003s
Aug  8 10:29:18.584: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015265284s
Aug  8 10:29:20.588: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019526755s
Aug  8 10:29:22.593: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023838542s
... skipping 19 lines ...
Aug  8 10:30:02.720: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 50.150973732s
Aug  8 10:30:04.725: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 52.156057092s
Aug  8 10:30:06.731: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 54.162066473s
Aug  8 10:30:08.734: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Pending", Reason="", readiness=false. Elapsed: 56.165329209s
Aug  8 10:30:10.738: INFO: Pod "pod-subpath-test-dynamicpv-jsqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 58.169132827s
STEP: Saw pod success
Aug  8 10:30:10.738: INFO: Pod "pod-subpath-test-dynamicpv-jsqw" satisfied condition "Succeeded or Failed"
Aug  8 10:30:10.741: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-jsqw container test-container-volume-dynamicpv-jsqw: <nil>
STEP: delete the pod
Aug  8 10:30:10.761: INFO: Waiting for pod pod-subpath-test-dynamicpv-jsqw to disappear
Aug  8 10:30:10.763: INFO: Pod pod-subpath-test-dynamicpv-jsqw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jsqw
Aug  8 10:30:10.763: INFO: Deleting pod "pod-subpath-test-dynamicpv-jsqw" in namespace "provisioning-1436"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":78,"failed":0}
Aug  8 10:30:15.792: INFO: Running AfterSuite actions on all nodes
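
The "found but phase is Pending instead of Bound" lines above reflect the same kind of poll, this time against the PersistentVolumeClaim. A rough client-go sketch of that check, assuming a 2s interval and the 5m timeout printed in the log; the helper name is made up for illustration.

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a claim until its phase is Bound, the same check behind the
// "PersistentVolumeClaim ... found but phase is Pending instead of Bound." lines.
func waitForPVCBound(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}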


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 44 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":151,"failed":0}
Aug  8 10:30:22.762: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:27:52.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
W0808 10:27:55.817099   64675 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  8 10:27:55.817: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
Aug  8 10:27:55.820: INFO: Creating resource for dynamic PV
Aug  8 10:27:55.820: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4459-e2e-scxwgdb
STEP: creating a claim
Aug  8 10:27:55.824: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:55.830: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iodp8j8] to have phase Bound
Aug  8 10:27:55.832: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodp8j8 found but phase is Pending instead of Bound.
Aug  8 10:27:57.837: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodp8j8 found but phase is Pending instead of Bound.
Aug  8 10:27:59.841: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodp8j8 found and phase=Bound (4.011220146s)
STEP: Creating pod pod-subpath-test-dynamicpv-p25f
STEP: Checking for subpath error in container status
Aug  8 10:29:23.864: INFO: Deleting pod "pod-subpath-test-dynamicpv-p25f" in namespace "provisioning-4459"
Aug  8 10:29:23.872: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-p25f" to be fully deleted
STEP: Deleting pod
Aug  8 10:30:19.881: INFO: Deleting pod "pod-subpath-test-dynamicpv-p25f" in namespace "provisioning-4459"
STEP: Deleting pvc
Aug  8 10:30:19.884: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iodp8j8"
... skipping 9 lines ...

• [SLOW TEST:152.722 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":297,"failed":0}
Aug  8 10:30:24.909: INFO: Running AfterSuite actions on all nodes
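
The pod-subpath-test-* pods in the records above mount the claim-backed volume both at its root and through a subPath; the "outside the volume" variants then use a subPath that escapes the volume and expect the kubelet to reject it, which is the "Checking for subpath error in container status" step. The sketch below shows the general shape of such a pod spec; the pod name, image, command and paths are hypothetical, not the suite's real values.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathPod builds a pod shaped like the pod-subpath-test-* pods in the log: the
// claim's volume is mounted once at its root and once through a subPath.
func subPathPod(ns, claimName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test", Namespace: ns},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
			Containers: []v1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /test-volume /test-subpath"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "test-volume", MountPath: "/test-volume"},
					// The "outside the volume" cases use a subPath that steps out of the
					// volume (e.g. via ".."); the kubelet refuses it and the test looks
					// for that error in the container status.
					{Name: "test-volume", MountPath: "/test-subpath", SubPath: "provisioning"},
				},
			}},
		},
	}
}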


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directory
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
... skipping 15 lines ...
Aug  8 10:28:34.182: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:28:34.187: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io576nn] to have phase Bound
Aug  8 10:28:34.190: INFO: PersistentVolumeClaim hostpath.csi.k8s.io576nn found but phase is Pending instead of Bound.
Aug  8 10:28:36.196: INFO: PersistentVolumeClaim hostpath.csi.k8s.io576nn found and phase=Bound (2.00971138s)
STEP: Creating pod pod-subpath-test-dynamicpv-dkmj
STEP: Creating a pod to test subpath
Aug  8 10:28:36.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dkmj" in namespace "provisioning-1499" to be "Succeeded or Failed"
Aug  8 10:28:36.211: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872759ms
Aug  8 10:28:38.215: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007079508s
Aug  8 10:28:40.220: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011621134s
Aug  8 10:28:42.225: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016508319s
Aug  8 10:28:44.229: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021208796s
Aug  8 10:28:46.234: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025725492s
... skipping 44 lines ...
Aug  8 10:30:16.479: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.270335178s
Aug  8 10:30:18.482: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.274210391s
Aug  8 10:30:20.487: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.278371334s
Aug  8 10:30:22.492: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.283292204s
Aug  8 10:30:24.495: INFO: Pod "pod-subpath-test-dynamicpv-dkmj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m48.287249121s
STEP: Saw pod success
Aug  8 10:30:24.496: INFO: Pod "pod-subpath-test-dynamicpv-dkmj" satisfied condition "Succeeded or Failed"
Aug  8 10:30:24.499: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-dkmj container test-container-volume-dynamicpv-dkmj: <nil>
STEP: delete the pod
Aug  8 10:30:24.515: INFO: Waiting for pod pod-subpath-test-dynamicpv-dkmj to disappear
Aug  8 10:30:24.518: INFO: Pod pod-subpath-test-dynamicpv-dkmj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dkmj
Aug  8 10:30:24.518: INFO: Deleting pod "pod-subpath-test-dynamicpv-dkmj" in namespace "provisioning-1499"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":226,"failed":0}
Aug  8 10:30:29.553: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":72,"failed":0}
Aug  8 10:30:30.966: INFO: Running AfterSuite actions on all nodes
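
The "Generic Ephemeral-volume" patterns above embed a volumeClaimTemplate in the pod spec, so the PVC is created alongside the pod and deleted with it. A sketch of that volume shape follows, with a made-up volume name, size and storage class; the read-only case from the test name is just a ReadOnly mount of the same volume.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// genericEphemeralVolume returns a pod volume whose PVC is created from an inline
// volumeClaimTemplate and garbage-collected with the pod.
func genericEphemeralVolume(storageClassName string) v1.Volume {
	spec := v1.PersistentVolumeClaimSpec{
		AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		StorageClassName: &storageClassName,
	}
	spec.Resources.Requests = v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Mi")}
	return v1.Volume{
		Name: "scratch",
		VolumeSource: v1.VolumeSource{
			Ephemeral: &v1.EphemeralVolumeSource{
				VolumeClaimTemplate: &v1.PersistentVolumeClaimTemplate{Spec: spec},
			},
		},
	}
}

// Read-only variant at the mount, e.g.:
//   v1.VolumeMount{Name: "scratch", MountPath: "/mnt/test", ReadOnly: true}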


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 85 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":39,"failed":0}
Aug  8 10:30:36.832: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 27 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":233,"failed":0}
Aug  8 10:30:39.557: INFO: Running AfterSuite actions on all nodes
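
The "CSI Ephemeral-volume" patterns, by contrast, name the CSI driver inline in the pod spec with no PVC object at all; read/write versus read-only is just the ReadOnly flag on the volume. A sketch with an illustrative volume name and no driver-specific attributes:

package sketch

import v1 "k8s.io/api/core/v1"

// csiInlineVolume returns the kind of volume the CSI Ephemeral-volume patterns use:
// the driver is referenced directly from the pod spec and provisions per-pod storage.
func csiInlineVolume(readOnly bool) v1.Volume {
	return v1.Volume{
		Name: "inline-volume",
		VolumeSource: v1.VolumeSource{
			CSI: &v1.CSIVolumeSource{
				Driver:   "hostpath.csi.k8s.io",
				ReadOnly: &readOnly,
				// Driver-specific parameters would go into VolumeAttributes; none are
				// needed for this sketch.
			},
		},
	}
}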


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support file as subpath [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
... skipping 17 lines ...
Aug  8 10:27:52.082: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:52.167: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io5ljjk] to have phase Bound
Aug  8 10:27:52.198: INFO: PersistentVolumeClaim hostpath.csi.k8s.io5ljjk found but phase is Pending instead of Bound.
Aug  8 10:27:54.202: INFO: PersistentVolumeClaim hostpath.csi.k8s.io5ljjk found and phase=Bound (2.035679772s)
STEP: Creating pod pod-subpath-test-dynamicpv-mlqs
STEP: Creating a pod to test atomic-volume-subpath
Aug  8 10:27:54.217: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mlqs" in namespace "provisioning-2101" to be "Succeeded or Failed"
Aug  8 10:27:54.220: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.348405ms
Aug  8 10:27:56.224: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007288804s
Aug  8 10:27:58.228: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010684483s
Aug  8 10:28:00.231: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013725389s
Aug  8 10:28:02.236: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018391003s
Aug  8 10:28:04.240: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022939861s
... skipping 71 lines ...
Aug  8 10:30:28.626: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Running", Reason="", readiness=true. Elapsed: 2m34.408711674s
Aug  8 10:30:30.630: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Running", Reason="", readiness=true. Elapsed: 2m36.413125968s
Aug  8 10:30:32.635: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Running", Reason="", readiness=true. Elapsed: 2m38.417479004s
Aug  8 10:30:34.638: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Running", Reason="", readiness=true. Elapsed: 2m40.421040532s
Aug  8 10:30:36.643: INFO: Pod "pod-subpath-test-dynamicpv-mlqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m42.426058766s
STEP: Saw pod success
Aug  8 10:30:36.643: INFO: Pod "pod-subpath-test-dynamicpv-mlqs" satisfied condition "Succeeded or Failed"
Aug  8 10:30:36.646: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-mlqs container test-container-subpath-dynamicpv-mlqs: <nil>
STEP: delete the pod
Aug  8 10:30:36.658: INFO: Waiting for pod pod-subpath-test-dynamicpv-mlqs to disappear
Aug  8 10:30:36.662: INFO: Pod pod-subpath-test-dynamicpv-mlqs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mlqs
Aug  8 10:30:36.662: INFO: Deleting pod "pod-subpath-test-dynamicpv-mlqs" in namespace "provisioning-2101"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":41,"failed":0}
Aug  8 10:30:41.694: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
... skipping 43 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should be able to unmount after the subpath directory is deleted [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":130,"failed":0}
Aug  8 10:30:46.281: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should verify container cannot write to subpath readonly volumes [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
... skipping 16 lines ...
STEP: creating a claim
Aug  8 10:27:51.645: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:51.814: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iospwct] to have phase Bound
Aug  8 10:27:51.935: INFO: PersistentVolumeClaim hostpath.csi.k8s.iospwct found but phase is Pending instead of Bound.
Aug  8 10:27:53.938: INFO: PersistentVolumeClaim hostpath.csi.k8s.iospwct found and phase=Bound (2.1245641s)
STEP: Creating pod to format volume volume-prep-provisioning-6651
Aug  8 10:27:53.950: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-6651" in namespace "provisioning-6651" to be "Succeeded or Failed"
Aug  8 10:27:53.955: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15285ms
Aug  8 10:27:55.962: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011453345s
Aug  8 10:27:57.966: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015968027s
Aug  8 10:27:59.973: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022002177s
Aug  8 10:28:01.976: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025876482s
Aug  8 10:28:03.980: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029754834s
... skipping 32 lines ...
Aug  8 10:29:10.155: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.204483189s
Aug  8 10:29:12.160: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.209238385s
Aug  8 10:29:14.164: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.213680955s
Aug  8 10:29:16.169: INFO: Pod "volume-prep-provisioning-6651": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.218500444s
Aug  8 10:29:18.174: INFO: Pod "volume-prep-provisioning-6651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m24.223238772s
STEP: Saw pod success
Aug  8 10:29:18.174: INFO: Pod "volume-prep-provisioning-6651" satisfied condition "Succeeded or Failed"
Aug  8 10:29:18.174: INFO: Deleting pod "volume-prep-provisioning-6651" in namespace "provisioning-6651"
Aug  8 10:29:18.189: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-6651" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-8sbm
STEP: Checking for subpath error in container status
Aug  8 10:30:48.211: INFO: Deleting pod "pod-subpath-test-dynamicpv-8sbm" in namespace "provisioning-6651"
Aug  8 10:30:48.219: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-8sbm" to be fully deleted
STEP: Deleting pod
Aug  8 10:30:48.223: INFO: Deleting pod "pod-subpath-test-dynamicpv-8sbm" in namespace "provisioning-6651"
STEP: Deleting pvc
Aug  8 10:30:48.225: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.iospwct"
... skipping 12 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should verify container cannot write to subpath readonly volumes [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:422
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":-1,"completed":1,"skipped":20,"failed":0}
Aug  8 10:30:53.252: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should create read-only inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
... skipping 29 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":214,"failed":0}
Aug  8 10:30:55.261: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support multiple inline ephemeral volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
... skipping 40 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":55,"failed":0}
Aug  8 10:30:56.561: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
... skipping 38 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":146,"failed":0}
Aug  8 10:30:57.758: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:27:54.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Aug  8 10:27:57.061: INFO: Creating resource for dynamic PV
Aug  8 10:27:57.061: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6607-e2e-scc7nw8
STEP: creating a claim
Aug  8 10:27:57.064: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:27:57.071: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2cpkx] to have phase Bound
Aug  8 10:27:57.074: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2cpkx found but phase is Pending instead of Bound.
Aug  8 10:27:59.086: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2cpkx found but phase is Pending instead of Bound.
Aug  8 10:28:01.089: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2cpkx found and phase=Bound (4.017796274s)
STEP: Creating pod pod-subpath-test-dynamicpv-6p9f
STEP: Checking for subpath error in container status
Aug  8 10:29:55.110: INFO: Deleting pod "pod-subpath-test-dynamicpv-6p9f" in namespace "provisioning-6607"
Aug  8 10:29:55.115: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-6p9f" to be fully deleted
STEP: Deleting pod
Aug  8 10:30:53.122: INFO: Deleting pod "pod-subpath-test-dynamicpv-6p9f" in namespace "provisioning-6607"
STEP: Deleting pvc
Aug  8 10:30:53.125: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io2cpkx"
... skipping 9 lines ...

• [SLOW TEST:183.870 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":161,"failed":0}
Aug  8 10:30:58.154: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  8 10:28:54.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
Aug  8 10:28:54.859: INFO: Creating resource for dynamic PV
Aug  8 10:28:54.859: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(hostpath.csi.k8s.io) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4861-e2e-scpd2rf
STEP: creating a claim
Aug  8 10:28:54.862: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:28:54.868: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io7gtck] to have phase Bound
Aug  8 10:28:54.873: INFO: PersistentVolumeClaim hostpath.csi.k8s.io7gtck found but phase is Pending instead of Bound.
Aug  8 10:28:56.878: INFO: PersistentVolumeClaim hostpath.csi.k8s.io7gtck found and phase=Bound (2.009129844s)
STEP: Creating pod pod-subpath-test-dynamicpv-lxmz
STEP: Checking for subpath error in container status
Aug  8 10:30:42.896: INFO: Deleting pod "pod-subpath-test-dynamicpv-lxmz" in namespace "provisioning-4861"
Aug  8 10:30:42.900: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-lxmz" to be fully deleted
STEP: Deleting pod
Aug  8 10:30:54.907: INFO: Deleting pod "pod-subpath-test-dynamicpv-lxmz" in namespace "provisioning-4861"
STEP: Deleting pvc
Aug  8 10:30:54.911: INFO: Deleting PersistentVolumeClaim "hostpath.csi.k8s.io7gtck"
... skipping 9 lines ...

• [SLOW TEST:125.119 seconds]
External Storage [Driver: hostpath.csi.k8s.io]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":-1,"completed":2,"skipped":71,"failed":0}
Aug  8 10:30:59.944: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
... skipping 43 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should resize volume when PVC is edited while pod is using it
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":86,"failed":0}
Aug  8 10:31:01.985: INFO: Running AfterSuite actions on all nodes
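
The volume-expand tests above edit spec.resources.requests.storage on a claim that a pod is still using and then wait for the resize to take effect. A client-go sketch of that edit, assuming the StorageClass sets allowVolumeExpansion: true; the helper name and arguments are illustrative.

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expandPVC raises the requested storage size of a bound claim; the suite then waits
// for the new capacity (and, for filesystem volumes, the in-pod filesystem size) to
// be reflected.
func expandPVC(client kubernetes.Interface, ns, name, newSize string) error {
	pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Bump spec.resources.requests.storage, e.g. from 1Mi to 2Mi.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
	_, err = client.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
	return err
}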


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 51 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":181,"failed":0}
Aug  8 10:31:12.119: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO 
  should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
... skipping 42 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumeIO
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:146
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]","total":-1,"completed":1,"skipped":135,"failed":0}
Aug  8 10:31:15.898: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
... skipping 97 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":196,"failed":0}
Aug  8 10:31:19.569: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 122 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":90,"failed":0}
Aug  8 10:31:21.244: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
... skipping 96 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:312
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":90,"failed":0}
Aug  8 10:31:34.109: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
... skipping 123 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:134
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}
Aug  8 10:31:38.081: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 52 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":24,"failed":0}
Aug  8 10:31:39.233: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 33 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":248,"failed":0}
Aug  8 10:31:39.768: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directories when readOnly specified in the volumeSource
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
... skipping 15 lines ...
Aug  8 10:29:42.413: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:29:42.419: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io4bp4v] to have phase Bound
Aug  8 10:29:42.423: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4bp4v found but phase is Pending instead of Bound.
Aug  8 10:29:44.427: INFO: PersistentVolumeClaim hostpath.csi.k8s.io4bp4v found and phase=Bound (2.007726864s)
STEP: Creating pod pod-subpath-test-dynamicpv-tmrw
STEP: Creating a pod to test subpath
Aug  8 10:29:44.437: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tmrw" in namespace "provisioning-9885" to be "Succeeded or Failed"
Aug  8 10:29:44.440: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704357ms
Aug  8 10:29:46.444: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006784109s
Aug  8 10:29:48.448: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01078458s
Aug  8 10:29:50.453: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016054433s
Aug  8 10:29:52.459: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021906381s
Aug  8 10:29:54.465: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028050584s
... skipping 27 lines ...
Aug  8 10:30:50.582: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.145028883s
Aug  8 10:30:52.585: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.148383553s
Aug  8 10:30:54.590: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.152781309s
Aug  8 10:30:56.593: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.156235012s
Aug  8 10:30:58.598: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m14.160631622s
STEP: Saw pod success
Aug  8 10:30:58.598: INFO: Pod "pod-subpath-test-dynamicpv-tmrw" satisfied condition "Succeeded or Failed"
Aug  8 10:30:58.600: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-tmrw container test-container-subpath-dynamicpv-tmrw: <nil>
STEP: delete the pod
Aug  8 10:30:58.616: INFO: Waiting for pod pod-subpath-test-dynamicpv-tmrw to disappear
Aug  8 10:30:58.620: INFO: Pod pod-subpath-test-dynamicpv-tmrw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tmrw
Aug  8 10:30:58.620: INFO: Deleting pod "pod-subpath-test-dynamicpv-tmrw" in namespace "provisioning-9885"
STEP: Creating pod pod-subpath-test-dynamicpv-tmrw
STEP: Creating a pod to test subpath
Aug  8 10:30:58.630: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tmrw" in namespace "provisioning-9885" to be "Succeeded or Failed"
Aug  8 10:30:58.634: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243139ms
Aug  8 10:31:00.640: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009666018s
Aug  8 10:31:02.644: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013316091s
Aug  8 10:31:04.649: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018498354s
Aug  8 10:31:06.657: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02671429s
Aug  8 10:31:08.662: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031121193s
... skipping 9 lines ...
Aug  8 10:31:28.703: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.07207188s
Aug  8 10:31:30.707: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.07593604s
Aug  8 10:31:32.713: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.082283293s
Aug  8 10:31:34.717: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.086322008s
Aug  8 10:31:36.722: INFO: Pod "pod-subpath-test-dynamicpv-tmrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.090820611s
STEP: Saw pod success
Aug  8 10:31:36.722: INFO: Pod "pod-subpath-test-dynamicpv-tmrw" satisfied condition "Succeeded or Failed"
Aug  8 10:31:36.724: INFO: Trying to get logs from node csi-prow-worker pod pod-subpath-test-dynamicpv-tmrw container test-container-subpath-dynamicpv-tmrw: <nil>
STEP: delete the pod
Aug  8 10:31:36.739: INFO: Waiting for pod pod-subpath-test-dynamicpv-tmrw to disappear
Aug  8 10:31:36.742: INFO: Pod pod-subpath-test-dynamicpv-tmrw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tmrw
Aug  8 10:31:36.742: INFO: Deleting pod "pod-subpath-test-dynamicpv-tmrw" in namespace "provisioning-9885"
... skipping 16 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":438,"failed":0}
Aug  8 10:31:41.774: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
... skipping 49 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":128,"failed":0}
Aug  8 10:31:42.407: INFO: Running AfterSuite actions on all nodes
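
The "block volmode" patterns above request volumeMode: Block on the claim, and pods consume it through volumeDevices (a raw device path) rather than volumeMounts; the "should not mount / map unused volumes" check then verifies nothing extra was mounted or mapped on the node. A sketch of those two pieces, with illustrative names and device path:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// blockModeClaimAndDevice shows a claim requesting volumeMode: Block and the
// corresponding container volumeDevice used instead of a volumeMount.
func blockModeClaimAndDevice() (v1.PersistentVolumeClaimSpec, v1.VolumeDevice) {
	blockMode := v1.PersistentVolumeBlock
	spec := v1.PersistentVolumeClaimSpec{
		AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		VolumeMode:  &blockMode, // Block instead of the default Filesystem
	}
	spec.Resources.Requests = v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Mi")}

	device := v1.VolumeDevice{
		Name:       "test-volume", // must match a pod volume backed by this claim
		DevicePath: "/dev/loop-test",
	}
	return spec, device
}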


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 45 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":75,"failed":0}
Aug  8 10:31:43.038: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 91 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":112,"failed":0}
Aug  8 10:31:49.229: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
... skipping 52 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":183,"failed":0}
Aug  8 10:31:49.483: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 59 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":51,"failed":0}
Aug  8 10:31:49.957: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes 
  should store data
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 96 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":23,"failed":0}
Aug  8 10:32:00.870: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
... skipping 121 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]","total":-1,"completed":2,"skipped":123,"failed":0}
Aug  8 10:32:04.869: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
... skipping 63 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":-1,"completed":1,"skipped":219,"failed":0}
Aug  8 10:32:11.010: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using file as subpath [Slow][LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335
... skipping 38 lines ...
Aug  8 10:30:06.895: INFO: stderr: ""
Aug  8 10:30:06.895: INFO: stdout: ""
Aug  8 10:30:06.895: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
Aug  8 10:30:34.902: INFO: Container has restart count: 3
Aug  8 10:31:08.902: INFO: Container has restart count: 4
Aug  8 10:32:06.906: FAIL: while waiting for container to stabilize
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 37 lines ...
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:09 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Started: Started container init-volume-dynamicpv-2lbx
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:09 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:10 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Started: Started container test-container-subpath-dynamicpv-2lbx
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:10 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:10 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Created: Created container test-container-volume-dynamicpv-2lbx
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:28:10 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Started: Started container test-container-volume-dynamicpv-2lbx
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:29:17 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Unhealthy: Liveness probe failed: cat: can't open '/probe-volume/probe-file': No such file or directory

Aug  8 10:32:19.942: INFO: At 2022-08-08 10:29:18 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} Killing: Container test-container-subpath-dynamicpv-2lbx failed liveness probe, will be restarted
Aug  8 10:32:19.942: INFO: At 2022-08-08 10:29:28 +0000 UTC - event for pod-subpath-test-dynamicpv-2lbx: {kubelet csi-prow-worker} BackOff: Back-off restarting failed container
Aug  8 10:32:19.945: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug  8 10:32:19.945: INFO: 
Aug  8 10:32:19.949: INFO: 
Logging node info for node csi-prow-control-plane
Aug  8 10:32:19.953: INFO: Node Info: &Node{ObjectMeta:{csi-prow-control-plane    cf21bd7d-7540-4e89-9a14-9e9efa49835b 3956 0 2022-08-08 10:19:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:csi-prow-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-08-08 10:19:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-08-08 10:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-08-08 10:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/csi-prow/csi-prow-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259975987200 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67445997568 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-08-08 10:29:40 +0000 UTC,LastTransitionTime:2022-08-08 10:19:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-08-08 10:29:40 +0000 UTC,LastTransitionTime:2022-08-08 10:19:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-08-08 10:29:40 +0000 UTC,LastTransitionTime:2022-08-08 10:19:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-08-08 10:29:40 +0000 UTC,LastTransitionTime:2022-08-08 10:19:38 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:csi-prow-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:44d405be7214483ebec41c7f55c6767b,SystemUUID:17130b43-4ec0-41b5-9527-f0256d69e058,BootID:701a1473-9de4-4df3-b0d7-744b1df32a2f,KernelVersion:5.4.0-1068-gke,OSImage:Ubuntu 21.04,ContainerRuntimeVersion:containerd://1.5.2,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:132714699,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:126834637,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:121042741,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:51865396,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:12945155,},ContainerImage{Names:[k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug  8 10:32:19.953: INFO: 
... skipping 84 lines ...
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using file as subpath [Slow][LinuxOnly] [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:335

    Aug  8 10:32:06.906: while waiting for container to stabilize
    Unexpected error:
        <*errors.errorString | 0xc000248250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870
------------------------------
{"msg":"FAILED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":-1,"completed":0,"skipped":34,"failed":1,"failures":["External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]"]}
Aug  8 10:32:20.447: INFO: Running AfterSuite actions on all nodes
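
The one failure in this run is the subPath restart test above: the liveness probe on the test container kept failing ("can't open '/probe-volume/probe-file'"), kubelet put the container into back-off, and the restart count was still climbing when the test gave up. The "timed out waiting for the condition" text is the standard timeout error from the polling helpers in k8s.io/apimachinery/pkg/util/wait, which the e2e code leans on for this kind of wait. Below is a minimal sketch of that polling pattern; the 2s interval and 10s timeout are illustrative only, not the values subpath.go uses, and newer apimachinery releases steer callers toward the context-based variants.

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll a condition at a fixed interval until it reports done or the
        // timeout expires. This condition never succeeds, so PollImmediate
        // returns wait.ErrWaitTimeout, whose message is exactly the
        // "timed out waiting for the condition" string in the log above.
        err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
            // The real test's condition tracks the container restart count
            // (the "Container has restart count: N" lines); this stub just
            // keeps waiting.
            return false, nil
        })
        fmt.Println(err) // timed out waiting for the condition
    }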


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should support two pods which share the same volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
... skipping 45 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support two pods which share the same volume
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":7,"failed":0}
Aug  8 10:32:24.528: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using directory as subpath [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320
... skipping 62 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using directory as subpath [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:320
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":-1,"completed":1,"skipped":332,"failed":0}
Aug  8 10:32:39.499: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":263,"failed":0}
Aug  8 10:30:32.947: INFO: Running AfterSuite actions on all nodes
Aug  8 10:32:39.533: INFO: Running AfterSuite actions on node 1
Aug  8 10:32:39.533: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 10:32:39.533: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 1 Failure:

[Fail] External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] subPath [It] should support restarting containers using file as subpath [Slow][LinuxOnly] 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:870

Ran 48 of 5976 Specs in 291.439 seconds
FAIL! -- 47 Passed | 1 Failed | 0 Pending | 5928 Skipped


Ginkgo ran 1 suite in 5m12.18763431s
Test Suite Failed
Mon Aug  8 10:32:39 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
go: updates to go.mod needed, disabled by -mod=vendor
	(Go version in go.mod is at least 1.14 and vendor directory exists.)
	to update it:
	go mod tidy
WARNING: E2E parallel failed
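
The filter-junit.go step above combines the per-node junit_*.xml reports into /logs/artifacts/junit_parallel.xml and, judging by the -t flag, keeps only the cases matching External.Storage|CSI.mock.volume. A rough sketch of that kind of filtering follows; it is not the actual tool, it assumes each report's root element is <testsuite>, and it models only the name attribute, which real JUnit files extend considerably.

    package main

    import (
        "encoding/xml"
        "fmt"
        "os"
        "regexp"
    )

    // Minimal slice of the JUnit schema, just enough to filter by test name.
    type testSuite struct {
        XMLName   xml.Name   `xml:"testsuite"`
        TestCases []testCase `xml:"testcase"`
    }

    type testCase struct {
        Name string `xml:"name,attr"`
    }

    func main() {
        keep := regexp.MustCompile(`External.Storage|CSI.mock.volume`)
        merged := testSuite{}
        for _, path := range os.Args[1:] { // e.g. /logs/artifacts/junit_01.xml ...
            data, err := os.ReadFile(path)
            if err != nil {
                continue // a node that wrote no report is not fatal
            }
            var ts testSuite
            if err := xml.Unmarshal(data, &ts); err != nil {
                continue
            }
            for _, tc := range ts.TestCases {
                if keep.MatchString(tc.Name) {
                    merged.TestCases = append(merged.TestCases, tc)
                }
            }
        }
        out, err := xml.MarshalIndent(merged, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
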
Mon Aug  8 10:32:40 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.Y65keq55jN/e2e-repo-list ginkgo -v -p -nodes 40 -focus=External.Storage.*(\[Feature:VolumeSnapshotDataSource\]) -skip=\[Serial\]|\[Disruptive\] /home/prow/go/pkg/csiprow.Y65keq55jN/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.Y65keq55jN/test-driver.yaml
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659954760 - Will randomize all specs
Will run 5976 specs

... skipping 403 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":157,"failed":0}
Aug  8 10:33:53.903: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning 
  should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
... skipping 105 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] provisioning
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":-1,"completed":1,"skipped":191,"failed":0}
Aug  8 10:34:03.196: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 10:33:02.440: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:33:02.562: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iodq9hb] to have phase Bound
Aug  8 10:33:02.691: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodq9hb found but phase is Pending instead of Bound.
Aug  8 10:33:04.696: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodq9hb found and phase=Bound (2.134034455s)
STEP: [init] starting a pod to use the claim
Aug  8 10:33:04.707: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-jxkk9" in namespace "snapshotting-3900" to be "Succeeded or Failed"
Aug  8 10:33:04.710: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.46364ms
Aug  8 10:33:06.715: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008377151s
Aug  8 10:33:08.718: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011632085s
Aug  8 10:33:10.723: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016355821s
Aug  8 10:33:12.727: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02076985s
Aug  8 10:33:14.733: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026135768s
Aug  8 10:33:16.737: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030295541s
Aug  8 10:33:18.740: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033913938s
Aug  8 10:33:20.745: INFO: Pod "pvc-snapshottable-tester-jxkk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.038192571s
STEP: Saw pod success
Aug  8 10:33:20.745: INFO: Pod "pvc-snapshottable-tester-jxkk9" satisfied condition "Succeeded or Failed"
Aug  8 10:33:20.752: INFO: Pod pvc-snapshottable-tester-jxkk9 has the following logs: 
Aug  8 10:33:20.752: INFO: Deleting pod "pvc-snapshottable-tester-jxkk9" in namespace "snapshotting-3900"
Aug  8 10:33:20.760: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-jxkk9" to be fully deleted
Aug  8 10:33:20.762: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iodq9hb] to have phase Bound
Aug  8 10:33:20.765: INFO: PersistentVolumeClaim hostpath.csi.k8s.iodq9hb found and phase=Bound (2.800349ms)
STEP: [init] checking the claim
... skipping 11 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 10:33:22.812: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-bnh8g" in namespace "snapshotting-3900" to be "Succeeded or Failed"
Aug  8 10:33:22.816: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023478ms
Aug  8 10:33:24.821: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008118014s
Aug  8 10:33:26.826: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012579105s
Aug  8 10:33:28.830: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017139564s
Aug  8 10:33:30.834: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020948362s
Aug  8 10:33:32.842: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029158169s
Aug  8 10:33:34.863: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.04991348s
Aug  8 10:33:36.869: INFO: Pod "pvc-snapshottable-data-tester-bnh8g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.056129296s
STEP: Saw pod success
Aug  8 10:33:36.869: INFO: Pod "pvc-snapshottable-data-tester-bnh8g" satisfied condition "Succeeded or Failed"
Aug  8 10:33:36.882: INFO: Pod pvc-snapshottable-data-tester-bnh8g has the following logs: 
Aug  8 10:33:36.882: INFO: Deleting pod "pvc-snapshottable-data-tester-bnh8g" in namespace "snapshotting-3900"
Aug  8 10:33:36.904: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-bnh8g" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 10:33:58.967: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43165 --kubeconfig=/root/.kube/config --namespace=snapshotting-3900 exec restored-pvc-tester-ghd2w --namespace=snapshotting-3900 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":148,"failed":0}
Aug  8 10:34:38.252: INFO: Running AfterSuite actions on all nodes
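
The long runs of Pod "...": Phase="Pending" ... Elapsed: ... lines above are the framework polling a tester pod until it reaches "Succeeded or Failed". What follows is not the framework's own helper, just a bare client-go sketch of the same polling idea; the kubeconfig path, namespace, and pod name are lifted from the log purely as placeholders.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Placeholders taken from the log above; the pod is long gone by now.
        ns, name := "snapshotting-3900", "pvc-snapshottable-tester-jxkk9"
        err = wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil // the "Saw pod success" case
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s failed", name)
            }
            return false, nil // still Pending or Running, keep polling
        })
        if err != nil {
            panic(err)
        }
    }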


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 10:33:02.671: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:33:02.815: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2lmb4] to have phase Bound
Aug  8 10:33:02.853: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2lmb4 found but phase is Pending instead of Bound.
Aug  8 10:33:04.857: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2lmb4 found and phase=Bound (2.042476952s)
STEP: [init] starting a pod to use the claim
Aug  8 10:33:04.871: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-lmfdk" in namespace "snapshotting-73" to be "Succeeded or Failed"
Aug  8 10:33:04.874: INFO: Pod "pvc-snapshottable-tester-lmfdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798911ms
Aug  8 10:33:06.877: INFO: Pod "pvc-snapshottable-tester-lmfdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006251712s
Aug  8 10:33:08.888: INFO: Pod "pvc-snapshottable-tester-lmfdk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017248092s
Aug  8 10:33:10.892: INFO: Pod "pvc-snapshottable-tester-lmfdk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020521362s
Aug  8 10:33:12.896: INFO: Pod "pvc-snapshottable-tester-lmfdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024655666s
STEP: Saw pod success
Aug  8 10:33:12.896: INFO: Pod "pvc-snapshottable-tester-lmfdk" satisfied condition "Succeeded or Failed"
Aug  8 10:33:12.909: INFO: Pod pvc-snapshottable-tester-lmfdk has the following logs: 
Aug  8 10:33:12.909: INFO: Deleting pod "pvc-snapshottable-tester-lmfdk" in namespace "snapshotting-73"
Aug  8 10:33:12.918: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-lmfdk" to be fully deleted
Aug  8 10:33:12.920: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io2lmb4] to have phase Bound
Aug  8 10:33:12.922: INFO: PersistentVolumeClaim hostpath.csi.k8s.io2lmb4 found and phase=Bound (2.187293ms)
STEP: [init] checking the claim
... skipping 31 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 10:33:19.038: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-zxhvj" in namespace "snapshotting-73" to be "Succeeded or Failed"
Aug  8 10:33:19.045: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463527ms
Aug  8 10:33:21.049: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010335691s
Aug  8 10:33:23.053: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01458462s
Aug  8 10:33:25.057: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018671554s
Aug  8 10:33:27.062: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023344882s
Aug  8 10:33:29.066: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027344459s
Aug  8 10:33:31.070: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031006877s
Aug  8 10:33:33.076: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.037346495s
Aug  8 10:33:35.098: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.059259407s
Aug  8 10:33:37.104: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.065546971s
Aug  8 10:33:39.113: INFO: Pod "pvc-snapshottable-data-tester-zxhvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.074526003s
STEP: Saw pod success
Aug  8 10:33:39.113: INFO: Pod "pvc-snapshottable-data-tester-zxhvj" satisfied condition "Succeeded or Failed"
Aug  8 10:33:39.128: INFO: Pod pvc-snapshottable-data-tester-zxhvj has the following logs: 
Aug  8 10:33:39.129: INFO: Deleting pod "pvc-snapshottable-data-tester-zxhvj" in namespace "snapshotting-73"
Aug  8 10:33:39.138: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-zxhvj" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 10:33:51.166: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43165 --kubeconfig=/root/.kube/config --namespace=snapshotting-73 exec restored-pvc-tester-zwd7k --namespace=snapshotting-73 -- cat /mnt/test/data'
... skipping 42 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":99,"failed":0}
Aug  8 10:34:40.444: INFO: Running AfterSuite actions on all nodes


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 16 lines ...
STEP: creating a claim
Aug  8 10:33:02.041: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  8 10:33:02.126: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io46sgg] to have phase Bound
Aug  8 10:33:02.309: INFO: PersistentVolumeClaim hostpath.csi.k8s.io46sgg found but phase is Pending instead of Bound.
Aug  8 10:33:04.314: INFO: PersistentVolumeClaim hostpath.csi.k8s.io46sgg found and phase=Bound (2.188521265s)
STEP: [init] starting a pod to use the claim
Aug  8 10:33:04.326: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-m4wsw" in namespace "snapshotting-8598" to be "Succeeded or Failed"
Aug  8 10:33:04.334: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 7.392328ms
Aug  8 10:33:06.338: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011212948s
Aug  8 10:33:08.344: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016752883s
Aug  8 10:33:10.347: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020210455s
Aug  8 10:33:12.351: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023740547s
Aug  8 10:33:14.356: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028843184s
Aug  8 10:33:16.360: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033458601s
Aug  8 10:33:18.364: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Running", Reason="", readiness=true. Elapsed: 14.037700044s
Aug  8 10:33:20.368: INFO: Pod "pvc-snapshottable-tester-m4wsw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.041366339s
STEP: Saw pod success
Aug  8 10:33:20.368: INFO: Pod "pvc-snapshottable-tester-m4wsw" satisfied condition "Succeeded or Failed"
Aug  8 10:33:20.376: INFO: Pod pvc-snapshottable-tester-m4wsw has the following logs: 
Aug  8 10:33:20.376: INFO: Deleting pod "pvc-snapshottable-tester-m4wsw" in namespace "snapshotting-8598"
Aug  8 10:33:20.383: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-m4wsw" to be fully deleted
Aug  8 10:33:20.386: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.io46sgg] to have phase Bound
Aug  8 10:33:20.388: INFO: PersistentVolumeClaim hostpath.csi.k8s.io46sgg found and phase=Bound (2.428248ms)
STEP: [init] checking the claim
... skipping 31 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 10:33:26.501: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-6ptlg" in namespace "snapshotting-8598" to be "Succeeded or Failed"
Aug  8 10:33:26.505: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123868ms
Aug  8 10:33:28.509: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007758226s
Aug  8 10:33:30.514: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012528914s
Aug  8 10:33:32.518: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016465363s
Aug  8 10:33:34.540: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03812016s
Aug  8 10:33:36.559: INFO: Pod "pvc-snapshottable-data-tester-6ptlg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05695489s
STEP: Saw pod success
Aug  8 10:33:36.559: INFO: Pod "pvc-snapshottable-data-tester-6ptlg" satisfied condition "Succeeded or Failed"
Aug  8 10:33:36.592: INFO: Pod pvc-snapshottable-data-tester-6ptlg has the following logs: 
Aug  8 10:33:36.592: INFO: Deleting pod "pvc-snapshottable-data-tester-6ptlg" in namespace "snapshotting-8598"
Aug  8 10:33:36.611: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-6ptlg" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 10:33:44.668: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43165 --kubeconfig=/root/.kube/config --namespace=snapshotting-8598 exec restored-pvc-tester-c8fcg --namespace=snapshotting-8598 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  8 10:34:08.962: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:09.966: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:10.969: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:11.974: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:12.978: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:13.982: INFO: volumesnapshotcontents pre-provisioned-snapcontent-7e25cf2c-b592-4acd-bfa1-b88915bce806 has been found and is not deleted
Aug  8 10:34:14.982: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  8 10:34:14.989: INFO: Pod restored-pvc-tester-c8fcg has the following logs: 
Aug  8 10:34:14.989: INFO: Deleting pod "restored-pvc-tester-c8fcg" in namespace "snapshotting-8598"
Aug  8 10:34:14.995: INFO: Wait up to 5m0s for pod "restored-pvc-tester-c8fcg" to be fully deleted
Aug  8 10:34:49.002: INFO: deleting claim "snapshotting-8598"/"pvc-6wfs9"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":148,"failed":0}
Aug  8 10:34:56.073: INFO: Running AfterSuite actions on all nodes
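
The "WaitUntil failed after reaching the timeout 30s" line inside this passing test, and again in the Dynamic Snapshot (retain policy) case further down, looks alarming, but it is consistent with the retain policy being exercised: the VolumeSnapshotContent is expected to outlive the snapshot, so a 30-second wait for it to disappear is supposed to run out. Assuming that reading is right, the pattern is a negative check along these lines; contentStillExists is a hypothetical stand-in, not the snapshot client API.

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // contentStillExists is a hypothetical stand-in for looking up the
    // VolumeSnapshotContent through the snapshot client.
    func contentStillExists() (bool, error) {
        return true, nil // pretend the object is still there, as in the log
    }

    func main() {
        // Negative check: poll for 30s and treat "it never went away" as success.
        err := wait.PollImmediate(1*time.Second, 30*time.Second, func() (bool, error) {
            exists, err := contentStillExists()
            if err != nil {
                return false, err
            }
            return !exists, nil // only done once the content disappears
        })
        if errors.Is(err, wait.ErrWaitTimeout) {
            fmt.Println("content was retained for the whole window")
        }
    }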


External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
... skipping 20 lines ...
Aug  8 10:33:04.013: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found but phase is Pending instead of Bound.
Aug  8 10:33:06.017: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found but phase is Pending instead of Bound.
Aug  8 10:33:08.022: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found but phase is Pending instead of Bound.
Aug  8 10:33:10.026: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found but phase is Pending instead of Bound.
Aug  8 10:33:12.031: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found and phase=Bound (10.089969474s)
STEP: [init] starting a pod to use the claim
Aug  8 10:33:12.044: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-t9dhm" in namespace "snapshotting-4490" to be "Succeeded or Failed"
Aug  8 10:33:12.053: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268441ms
Aug  8 10:33:14.057: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012347859s
Aug  8 10:33:16.061: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01639787s
Aug  8 10:33:18.065: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020933946s
Aug  8 10:33:20.072: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027380258s
Aug  8 10:33:22.076: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032132728s
Aug  8 10:33:24.081: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.036416407s
Aug  8 10:33:26.084: INFO: Pod "pvc-snapshottable-tester-t9dhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.040079543s
STEP: Saw pod success
Aug  8 10:33:26.084: INFO: Pod "pvc-snapshottable-tester-t9dhm" satisfied condition "Succeeded or Failed"
Aug  8 10:33:26.093: INFO: Pod pvc-snapshottable-tester-t9dhm has the following logs: 
Aug  8 10:33:26.093: INFO: Deleting pod "pvc-snapshottable-tester-t9dhm" in namespace "snapshotting-4490"
Aug  8 10:33:26.102: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-t9dhm" to be fully deleted
Aug  8 10:33:26.104: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [hostpath.csi.k8s.iojcr4x] to have phase Bound
Aug  8 10:33:26.106: INFO: PersistentVolumeClaim hostpath.csi.k8s.iojcr4x found and phase=Bound (1.99194ms)
STEP: [init] checking the claim
... skipping 12 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
Aug  8 10:33:30.151: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-ghgkl" in namespace "snapshotting-4490" to be "Succeeded or Failed"
Aug  8 10:33:30.155: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700434ms
Aug  8 10:33:32.159: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491171s
Aug  8 10:33:34.175: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023760373s
Aug  8 10:33:36.187: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035863506s
Aug  8 10:33:38.194: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043261219s
Aug  8 10:33:40.198: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.046951583s
Aug  8 10:33:42.203: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051861102s
Aug  8 10:33:44.208: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.056633432s
Aug  8 10:33:46.211: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 16.060546677s
Aug  8 10:33:48.216: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Pending", Reason="", readiness=false. Elapsed: 18.065136922s
Aug  8 10:33:50.220: INFO: Pod "pvc-snapshottable-data-tester-ghgkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.069440542s
STEP: Saw pod success
Aug  8 10:33:50.220: INFO: Pod "pvc-snapshottable-data-tester-ghgkl" satisfied condition "Succeeded or Failed"
Aug  8 10:33:50.229: INFO: Pod pvc-snapshottable-data-tester-ghgkl has the following logs: 
Aug  8 10:33:50.229: INFO: Deleting pod "pvc-snapshottable-data-tester-ghgkl" in namespace "snapshotting-4490"
Aug  8 10:33:50.238: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-ghgkl" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
Aug  8 10:34:02.264: INFO: Running '/usr/local/bin/kubectl --server=https://127.0.0.1:43165 --kubeconfig=/root/.kube/config --namespace=snapshotting-4490 exec restored-pvc-tester-wrnr9 --namespace=snapshotting-4490 -- cat /mnt/test/data'
... skipping 33 lines ...
Aug  8 10:34:26.573: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:27.577: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:28.582: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:29.586: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:30.591: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:31.595: INFO: volumesnapshotcontents snapcontent-a904233f-ad12-4b4f-803d-81aa8cdc3cd3 has been found and is not deleted
Aug  8 10:34:32.595: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
Aug  8 10:34:32.602: INFO: Pod restored-pvc-tester-wrnr9 has the following logs: 
Aug  8 10:34:32.602: INFO: Deleting pod "restored-pvc-tester-wrnr9" in namespace "snapshotting-4490"
Aug  8 10:34:32.607: INFO: Wait up to 5m0s for pod "restored-pvc-tester-wrnr9" to be fully deleted
Aug  8 10:35:14.615: INFO: deleting claim "snapshotting-4490"/"pvc-gnhkk"
... skipping 28 lines ...
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion
        /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
------------------------------
{"msg":"PASSED External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion","total":-1,"completed":1,"skipped":132,"failed":0}
Aug  8 10:35:21.682: INFO: Running AfterSuite actions on all nodes


Aug  8 10:33:03.006: INFO: Running AfterSuite actions on all nodes
Aug  8 10:35:21.717: INFO: Running AfterSuite actions on node 1
Aug  8 10:35:21.717: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 10:35:21.717: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory


Ran 6 of 5976 Specs in 145.612 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 5970 Skipped


Ginkgo ran 1 suite in 2m41.3273s
Test Suite Passed
Mon Aug  8 10:35:21 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_parallel-features.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 5 lines ...
	to update it:
	go mod tidy
Mon Aug  8 10:35:22 UTC 2022 go1.19 /home/prow/go/src/k8s.io/kubernetes$ env KUBECONFIG=/root/.kube/config KUBE_TEST_REPO_LIST=/home/prow/go/pkg/csiprow.Y65keq55jN/e2e-repo-list ginkgo -v -focus=External.Storage.*(\[Serial\]|\[Disruptive\]) -skip=\[Feature:|Disruptive /home/prow/go/pkg/csiprow.Y65keq55jN/e2e.test -- -report-dir /logs/artifacts -storage.testdriver=/home/prow/go/pkg/csiprow.Y65keq55jN/test-driver.yaml
Aug  8 10:35:24.168: INFO: Driver loaded from path [/home/prow/go/pkg/csiprow.Y65keq55jN/test-driver.yaml]: &{DriverInfo:{Name:hostpath.csi.k8s.io InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max: Min:1Mi} SupportedFsType:map[:{}] SupportedMountOption:map[] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true multipods:true nodeExpansion:true persistence:true singleNodeVolume:true snapshotDataSource:true topology:true] RequiredAccessModes:[] TopologyKeys:[] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:true FromFile: FromExistingClassName:} SnapshotClass:{FromName:true FromFile: FromExistingClassName:} InlineVolumes:[{Attributes:map[] Shared:false ReadOnly:false}] ClientNodeName:csi-prow-worker Timeouts:map[]}
Aug  8 10:35:24.235: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0808 10:35:24.235852  103467 e2e.go:129] Starting e2e run "95fc1e9a-894b-408f-b079-be2de3c1f007" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1659954922 - Will randomize all specs
Will run 4 of 5976 specs

Aug  8 10:35:24.301: INFO: >>> kubeConfig: /root/.kube/config
... skipping 113 lines ...

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:126
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug  8 10:35:24.472: INFO: Running AfterSuite actions on all nodes
Aug  8 10:35:24.472: INFO: Running AfterSuite actions on node 1
Aug  8 10:35:24.472: INFO: Dumping logs locally to: /logs/artifacts
Aug  8 10:35:24.473: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":0,"skipped":5976,"failed":0}

Ran 0 of 5976 Specs in 0.174 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 5976 Skipped
PASS

Ginkgo ran 1 suite in 2.114992488s
Test Suite Passed
Mon Aug  8 10:35:24 UTC 2022 go1.18 /home/prow/go/src/k8s.io/kubernetes$ go run /home/prow/go/src/github.com/kubernetes-csi/csi-driver-host-path/release-tools/filter-junit.go -t=External.Storage|CSI.mock.volume -o /logs/artifacts/junit_serial.xml /logs/artifacts/junit_01.xml /logs/artifacts/junit_02.xml /logs/artifacts/junit_03.xml /logs/artifacts/junit_04.xml /logs/artifacts/junit_05.xml /logs/artifacts/junit_06.xml /logs/artifacts/junit_07.xml /logs/artifacts/junit_08.xml /logs/artifacts/junit_09.xml /logs/artifacts/junit_10.xml /logs/artifacts/junit_11.xml /logs/artifacts/junit_12.xml /logs/artifacts/junit_13.xml /logs/artifacts/junit_14.xml /logs/artifacts/junit_15.xml /logs/artifacts/junit_16.xml /logs/artifacts/junit_17.xml /logs/artifacts/junit_18.xml /logs/artifacts/junit_19.xml /logs/artifacts/junit_20.xml /logs/artifacts/junit_21.xml /logs/artifacts/junit_22.xml /logs/artifacts/junit_23.xml /logs/artifacts/junit_24.xml /logs/artifacts/junit_25.xml /logs/artifacts/junit_26.xml /logs/artifacts/junit_27.xml /logs/artifacts/junit_28.xml /logs/artifacts/junit_29.xml /logs/artifacts/junit_30.xml /logs/artifacts/junit_31.xml /logs/artifacts/junit_32.xml /logs/artifacts/junit_33.xml /logs/artifacts/junit_34.xml /logs/artifacts/junit_35.xml /logs/artifacts/junit_36.xml /logs/artifacts/junit_37.xml /logs/artifacts/junit_38.xml /logs/artifacts/junit_39.xml /logs/artifacts/junit_40.xml
go: updates to go.mod needed, disabled by -mod=vendor
... skipping 22 lines ...