Result: FAILURE
Tests: 25 failed / 991 succeeded
Started: 2019-03-28 03:02
Elapsed: 1h41m
Builder: gke-prow-containerd-pool-99179761-9n6z
pod: d4883349-5105-11e9-ab15-0a580a6c080e
infra-commit: 17cf3f083
job-version: v1.15.0-alpha.0.1601+312eb890e6cf81
revision: v1.15.0-alpha.0.1601+312eb890e6cf81
master_os_image: cos-69-10895-138-0
node_os_image: cos-beta-73-11647-64-0

Test Failures


Test 52m2s

error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[DisabledForLargeClusters\] --minStartupPods=8 --node-schedulable-timeout=90m --logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/1111101121893502980/artifacts --report-dir=/workspace/_artifacts --disable-log-dump=true --cluster-ip-range=10.64.0.0/11: exit status 1
from junit_runner.xml


[sig-storage] CSI Volumes [Driver: csi-hostpath-v0] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow] 14m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\-v0\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\swith\sbackstepping\sis\soutside\sthe\svolume\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:241
Unexpected error:
    <*errors.errorString | 0xc0071692e0>: {
        s: "PersistentVolumeClaims [pvc-xnb6b] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-xnb6b] not all in phase Bound within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:329
				
from junit_22.xml


[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow] 16m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\sdirectory\sis\soutside\sthe\svolume\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:208
while waiting for subpath failure
Unexpected error:
    <*errors.errorString | 0xc0002b5400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:719
				
from junit_36.xml


[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow] 27m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\svolumeIO\sshould\swrite\sfiles\sof\svarious\ssizes\,\sverify\ssize\,\svalidate\scontent\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:120
Unexpected error:
    <*errors.errorString | 0xc007857120>: {
        s: "client pod \"hostpath-io-client\" not running: timed out waiting for the condition",
    }
    client pod "hostpath-io-client" not running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:136
				
from junit_33.xml


[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume should concurrently access the single volume from pods on the same node 16m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\sshould\sconcurrently\saccess\sthe\ssingle\svolume\sfrom\spods\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:265
Unexpected error:
    <*errors.errorString | 0xc0097255a0>: {
        s: "pod \"security-context-b174c491-dfff-4627-b43d-7dac5e03cc35\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-b174c491-dfff-4627-b43d-7dac5e03cc35" is not Running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:403
				
from junit_38.xml


[sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 15m52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\smock\svolume\sCSI\svolume\slimit\sinformation\susing\smock\sdriver\sshould\sreport\sattach\slimit\swhen\slimit\sis\sbigger\sthan\s0$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:352
Failed to start pod1: timed out waiting for the condition
Unexpected error:
    <*errors.errorString | 0xc0002b5400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:371
				
from junit_37.xml


[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist 9m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\smock\svolume\sCSI\sworkload\sinformation\susing\smock\sdriver\sshould\snot\sbe\spassed\swhen\sCSIDriver\sdoes\snot\sexist$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:327
Failed waiting for PVC to be bound PersistentVolumeClaims [pvc-k64qn] not all in phase Bound within 5m0s
Unexpected error:
    <*errors.errorString | 0xc003af5110>: {
        s: "PersistentVolumeClaims [pvc-k64qn] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-k64qn] not all in phase Bound within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:616
				
from junit_20.xml


[sig-storage] Detaching volumes should not work when mount is in progress 4m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sDetaching\svolumes\sshould\snot\swork\swhen\smount\sis\sin\sprogress$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/detach_mounted.go:66
while waiting for volume to be removed from in-use
Unexpected error:
    <*errors.errorString | 0xc00029fb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/detach_mounted.go:115
				
from junit_23.xml


[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path 14m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\shostPathSymlink\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\ssubPath\sshould\ssupport\snon\-existent\spath$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:162
wait for pod "pod-subpath-test-hostpathsymlink-7pf9" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc000263420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
from junit_03.xml


[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount 15m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\shostPathSymlink\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sdirectory\sspecified\sin\sthe\svolumeMount$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:317
while waiting for hostPath teardown pod to succeed
Unexpected error:
    <*errors.errorString | 0xc000e8bfe0>: {
        s: "Gave up after waiting 5m0s for pod \"hostpath-symlink-prep-provisioning-994\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "hostpath-symlink-prep-provisioning-994" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:874
				
from junit_19.xml


[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow] 9m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\shostPath\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sdirectory\sas\ssubpath\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
while waiting for container to restart
Unexpected error:
    <*errors.errorString | 0xc00027b400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:825
				
from junit_40.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path 5m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\snon\-existent\spath$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:162
Unexpected error:
    <*errors.errorString | 0xc000300490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:81
				
from junit_14.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow] 6m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumeIO\sshould\swrite\sfiles\sof\svarious\ssizes\,\sverify\ssize\,\svalidate\scontent\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:120
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-8427 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c && dd if=/dev/zero of=/tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c/file bs=4096 count=5120 && sudo losetup -f /tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c/file] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/04bf6573eac79963ecdf5e3e8ef8b9f8c963891c0252abf00c42a96e822a6f38/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc00b6ff440 exit status 1 <nil> <nil> true [0xc003e8e1c0 0xc003e8e1d8 0xc003e8e1f0] [0xc003e8e1c0 0xc003e8e1d8 0xc003e8e1f0] [0xc003e8e1d0 0xc003e8e1e8] [0x9bfaa0 0x9bfaa0] 0xc001cb06c0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/04bf6573eac79963ecdf5e3e8ef8b9f8c963891c0252abf00c42a96e822a6f38/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-8427 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c && dd if=/dev/zero of=/tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c/file bs=4096 count=5120 && sudo losetup -f /tmp/local-driver-fc4c387f-9eba-4104-ac89-99a4a6d7503c/file] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/04bf6573eac79963ecdf5e3e8ef8b9f8c963891c0252abf00c42a96e822a6f38/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc00b6ff440 exit status 1 <nil> <nil> true [0xc003e8e1c0 0xc003e8e1d8 0xc003e8e1f0] [0xc003e8e1c0 0xc003e8e1d8 0xc003e8e1f0] [0xc003e8e1d0 0xc003e8e1e8] [0x9bfaa0 0x9bfaa0] 0xc001cb06c0 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/04bf6573eac79963ecdf5e3e8ef8b9f8c963891c0252abf00c42a96e822a6f38/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:134
				
from junit_31.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume 7m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:167
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volume-8926 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(sudo losetup | grep /tmp/local-driver-95db5117-bc14-4bc4-96bf-ca86ef6fa58a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/32d7bc882692a45ee18deb16ae16912b614f2aba64448296997129fb3e3f727d/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc0057e5b90 exit status 1 <nil> <nil> true [0xc001e80150 0xc001e80168 0xc001e80180] [0xc001e80150 0xc001e80168 0xc001e80180] [0xc001e80160 0xc001e80178] [0x9bfaa0 0x9bfaa0] 0xc00254afc0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/32d7bc882692a45ee18deb16ae16912b614f2aba64448296997129fb3e3f727d/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volume-8926 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(sudo losetup | grep /tmp/local-driver-95db5117-bc14-4bc4-96bf-ca86ef6fa58a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/32d7bc882692a45ee18deb16ae16912b614f2aba64448296997129fb3e3f727d/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc0057e5b90 exit status 1 <nil> <nil> true [0xc001e80150 0xc001e80168 0xc001e80180] [0xc001e80150 0xc001e80168 0xc001e80180] [0xc001e80160 0xc001e80178] [0x9bfaa0 0x9bfaa0] 0xc00254afc0 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/32d7bc882692a45ee18deb16ae16912b614f2aba64448296997129fb3e3f727d/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:141
				
from junit_02.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume 5m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:167
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volume-6076 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482 && sudo mount --bind /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482 /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482] []  <nil>  error: unable to upgrade connection: 404 page not found\n [] <nil> 0xc00b554ae0 exit status 1 <nil> <nil> true [0xc0021da2d0 0xc0021da2e8 0xc0021da300] [0xc0021da2d0 0xc0021da2e8 0xc0021da300] [0xc0021da2e0 0xc0021da2f8] [0x9bfaa0 0x9bfaa0] 0xc001cd34a0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: 404 page not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volume-6076 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482 && sudo mount --bind /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482 /tmp/local-driver-128bf392-3046-47a8-97f3-7eb09d413482] []  <nil>  error: unable to upgrade connection: 404 page not found
     [] <nil> 0xc00b554ae0 exit status 1 <nil> <nil> true [0xc0021da2d0 0xc0021da2e8 0xc0021da300] [0xc0021da2d0 0xc0021da2e8 0xc0021da300] [0xc0021da2e0 0xc0021da2f8] [0x9bfaa0 0x9bfaa0] 0xc001cd34a0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: 404 page not found
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:239
				
from junit_17.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should create sc, pod, pv, and pvc, read/write to the pv, and delete all created resources 14m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\screate\ssc\,\spod\,\spv\,\sand\spvc\,\sread\/write\sto\sthe\spv\,\sand\sdelete\sall\screated\sresources$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:194
Unexpected error:
    <*errors.errorString | 0xc002fec850>: {
        s: "pod \"security-context-ce639c87-46fa-4b6a-82a2-4d3bfe24dad4\" was not deleted: timed out waiting for the condition",
    }
    pod "security-context-ce639c87-46fa-4b6a-82a2-4d3bfe24dad4" was not deleted: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:220
				
from junit_27.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes 5m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
Unexpected error:
    <*errors.errorString | 0xc0002cb410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:81
				
from junit_15.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow] 5m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumeIO\sshould\swrite\sfiles\sof\svarious\ssizes\,\sverify\ssize\,\svalidate\scontent\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:120
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-5518 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend && sudo mount --bind /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend && sudo ln -s /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/containers/cd5b20ea9abf5efa327d106b727194cfec4719bbe1b71c7f59f0b14865c3d362/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc001fc28d0 exit status 1 <nil> <nil> true [0xc00164e548 0xc00164e588 0xc00164e620] [0xc00164e548 0xc00164e588 0xc00164e620] [0xc00164e580 0xc00164e5f8] [0x9bfaa0 0x9bfaa0] 0xc0015c9440 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/containers/cd5b20ea9abf5efa327d106b727194cfec4719bbe1b71c7f59f0b14865c3d362/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-5518 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend && sudo mount --bind /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend && sudo ln -s /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d-backend /tmp/local-driver-c7f0f62d-4a72-4a9d-b977-0dade608de3d] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/containers/cd5b20ea9abf5efa327d106b727194cfec4719bbe1b71c7f59f0b14865c3d362/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc001fc28d0 exit status 1 <nil> <nil> true [0xc00164e548 0xc00164e588 0xc00164e620] [0xc00164e548 0xc00164e588 0xc00164e620] [0xc00164e580 0xc00164e5f8] [0x9bfaa0 0x9bfaa0] 0xc0015c9440 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/containers/cd5b20ea9abf5efa327d106b727194cfec4719bbe1b71c7f59f0b14865c3d362/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:259
				
from junit_01.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted 13m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sbe\sable\sto\sunmount\safter\sthe\ssubpath\sdirectory\sis\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
while deleting pod
Unexpected error:
    <*errors.errorString | 0xc0001d7580>: {
        s: "pod \"pod-subpath-test-local-preprovisionedpv-9np5\" was not deleted: timed out waiting for the condition",
    }
    pod "pod-subpath-test-local-preprovisionedpv-9np5" was not deleted: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:147
				
from junit_21.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow] 5m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\snon\-existent\ssubpath\sis\soutside\sthe\svolume\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-5119 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae-backend && sudo ln -s /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae-backend /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/8cce4aa272bb243b05446413cef7b5d5faf1f2fc6d2dc0c67e405abe9cdb31f8/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc00a25ca50 exit status 1 <nil> <nil> true [0xc0022ac0c0 0xc0022ac0d8 0xc0022ac0f0] [0xc0022ac0c0 0xc0022ac0d8 0xc0022ac0f0] [0xc0022ac0d0 0xc0022ac0e8] [0x9bfaa0 0x9bfaa0] 0xc00235bec0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/8cce4aa272bb243b05446413cef7b5d5faf1f2fc6d2dc0c67e405abe9cdb31f8/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-5119 hostexec-gce-scale-cluster-minion-group-4-125p -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae-backend && sudo ln -s /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae-backend /tmp/local-driver-ad2764f5-7fba-4956-add4-41ed735bd3ae] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/8cce4aa272bb243b05446413cef7b5d5faf1f2fc6d2dc0c67e405abe9cdb31f8/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc00a25ca50 exit status 1 <nil> <nil> true [0xc0022ac0c0 0xc0022ac0d8 0xc0022ac0f0] [0xc0022ac0c0 0xc0022ac0d8 0xc0022ac0f0] [0xc0022ac0d0 0xc0022ac0e8] [0x9bfaa0 0x9bfaa0] 0xc00235bec0 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/8cce4aa272bb243b05446413cef7b5d5faf1f2fc6d2dc0c67e405abe9cdb31f8/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:219
				
from junit_06.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should be mountable 17m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:136
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec local-client --namespace=volume-3483 -- cat /opt/0/index.html] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/ad6a16c7efe34e6d4fea7c1ac2b90a5e0698e213c40641378146562cbc762545/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc00292d830 exit status 1 <nil> <nil> true [0xc001aa0678 0xc001aa0690 0xc001aa06b0] [0xc001aa0678 0xc001aa0690 0xc001aa06b0] [0xc001aa0688 0xc001aa06a8] [0x9bfaa0 0x9bfaa0] 0xc0020a4ae0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/ad6a16c7efe34e6d4fea7c1ac2b90a5e0698e213c40641378146562cbc762545/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec local-client --namespace=volume-3483 -- cat /opt/0/index.html] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/ad6a16c7efe34e6d4fea7c1ac2b90a5e0698e213c40641378146562cbc762545/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc00292d830 exit status 1 <nil> <nil> true [0xc001aa0678 0xc001aa0690 0xc001aa06b0] [0xc001aa0678 0xc001aa0690 0xc001aa06b0] [0xc001aa0688 0xc001aa06a8] [0x9bfaa0 0x9bfaa0] 0xc0020a4ae0 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/ad6a16c7efe34e6d4fea7c1ac2b90a5e0698e213c40641378146562cbc762545/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2306
				
from junit_25.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow] 16m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sdirectory\sas\ssubpath\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
while failing liveness probe
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-1615 pod-subpath-test-local-preprovisionedpv-w4mb --container test-container-volume-local-preprovisionedpv-w4mb -- /bin/sh -c rm /probe-volume/probe-file] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/70c7e1e06f5d71ffa363b5ef3f5b7eb5532b63127ce4386193647700cbf8349b/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n [] <nil> 0xc0044e3800 exit status 1 <nil> <nil> true [0xc009a62098 0xc009a620b0 0xc009a620c8] [0xc009a62098 0xc009a620b0 0xc009a620c8] [0xc009a620a8 0xc009a620c0] [0x9bfaa0 0x9bfaa0] 0xc001a388a0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/70c7e1e06f5d71ffa363b5ef3f5b7eb5532b63127ce4386193647700cbf8349b/json: read unix @->/var/run/docker.sock: read: connection reset by peer\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.243.221.49 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-1615 pod-subpath-test-local-preprovisionedpv-w4mb --container test-container-volume-local-preprovisionedpv-w4mb -- /bin/sh -c rm /probe-volume/probe-file] []  <nil>  error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/70c7e1e06f5d71ffa363b5ef3f5b7eb5532b63127ce4386193647700cbf8349b/json: read unix @->/var/run/docker.sock: read: connection reset by peer
     [] <nil> 0xc0044e3800 exit status 1 <nil> <nil> true [0xc009a62098 0xc009a620b0 0xc009a620c8] [0xc009a62098 0xc009a620b0 0xc009a620c8] [0xc009a620a8 0xc009a620c0] [0x9bfaa0 0x9bfaa0] 0xc001a388a0 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: error during connect: Get http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.38/exec/70c7e1e06f5d71ffa363b5ef3f5b7eb5532b63127ce4386193647700cbf8349b/json: read unix @->/var/run/docker.sock: read: connection reset by peer
    
    error:
    exit status 1
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:803
				
from junit_34.xml


[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes 5m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\stmpfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:375
Unexpected error:
    <*errors.errorString | 0xc0002ad400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:81
				
from junit_12.xml


[sig-storage] PersistentVolumes Default StorageClass pods that use multiple volumes should be reschedulable [Slow] 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sPersistentVolumes\sDefault\sStorageClass\spods\sthat\suse\smultiple\svolumes\sshould\sbe\sreschedulable\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:312
Unexpected error:
    <*errors.errorString | 0xc013c05fd0>: {
        s: "pod Create API error: Pod \"security-context-fca90994-a9d3-426f-bc03-54125451499a\" is invalid: [spec.volumes[0].persistentVolumeClaim.claimName: Required value, spec.volumes[1].persistentVolumeClaim.claimName: Required value, spec.volumes[2].persistentVolumeClaim.claimName: Required value, spec.volumes[3].persistentVolumeClaim.claimName: Required value, spec.containers[0].volumeMounts[0].name: Not found: \"volume1\", spec.containers[0].volumeMounts[1].name: Not found: \"volume2\", spec.containers[0].volumeMounts[2].name: Not found: \"volume3\", spec.containers[0].volumeMounts[3].name: Not found: \"volume4\"]",
    }
    pod Create API error: Pod "security-context-fca90994-a9d3-426f-bc03-54125451499a" is invalid: [spec.volumes[0].persistentVolumeClaim.claimName: Required value, spec.volumes[1].persistentVolumeClaim.claimName: Required value, spec.volumes[2].persistentVolumeClaim.claimName: Required value, spec.volumes[3].persistentVolumeClaim.claimName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "volume1", spec.containers[0].volumeMounts[1].name: Not found: "volume2", spec.containers[0].volumeMounts[2].name: Not found: "volume3", spec.containers[0].volumeMounts[3].name: Not found: "volume4"]
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:329
				
from junit_36.xml


[sig-storage] PersistentVolumes Default StorageClass pods that use multiple volumes should be reschedulable [Slow] 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sPersistentVolumes\sDefault\sStorageClass\spods\sthat\suse\smultiple\svolumes\sshould\sbe\sreschedulable\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:312
Unexpected error:
    <*errors.errorString | 0xc002609320>: {
        s: "pod Create API error: Pod \"security-context-fa45db6e-dbd2-4a70-b7a9-0ce0b39c7913\" is invalid: [spec.volumes[0].persistentVolumeClaim.claimName: Required value, spec.volumes[1].persistentVolumeClaim.claimName: Required value, spec.volumes[2].persistentVolumeClaim.claimName: Required value, spec.volumes[3].persistentVolumeClaim.claimName: Required value, spec.containers[0].volumeMounts[0].name: Not found: \"volume1\", spec.containers[0].volumeMounts[1].name: Not found: \"volume2\", spec.containers[0].volumeMounts[2].name: Not found: \"volume3\", spec.containers[0].volumeMounts[3].name: Not found: \"volume4\"]",
    }
    pod Create API error: Pod "security-context-fa45db6e-dbd2-4a70-b7a9-0ce0b39c7913" is invalid: [spec.volumes[0].persistentVolumeClaim.claimName: Required value, spec.volumes[1].persistentVolumeClaim.claimName: Required value, spec.volumes[2].persistentVolumeClaim.claimName: Required value, spec.volumes[3].persistentVolumeClaim.claimName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "volume1", spec.containers[0].volumeMounts[1].name: Not found: "volume2", spec.containers[0].volumeMounts[2].name: Not found: "volume3", spec.containers[0].volumeMounts[3].name: Not found: "volume4"]
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:329
				
from junit_36.xml


991 Passed Tests

3151 Skipped Tests