Result: FAILURE
Tests: 8 failed / 274 succeeded
Started: 2019-10-08 17:23
Elapsed: 50m10s
Revision: v1.17.0-alpha.1.191+1501de6a560dd7
Builder: gke-prow-ssd-pool-1a225945-xk3x
pod: 53ecac02-e9f0-11e9-9fdb-7ed7fc4215ec
resultstore: https://source.cloud.google.com/results/invocations/d0acb21d-1a0f-4173-baf6-5a4061c8c841/targets/test
infra-commit: b2bb51680
job-version: v1.17.0-alpha.1.191+1501de6a560dd7
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node 3m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:194
Oct  8 17:39:11.432: "echo 9RHwF9jSP+LcR4ReCV2bmq1Q1uH+OEBDZ8ngfJ61iOtOK1a/BgqauYOgzuyHvk2GqWDjEHeFCUeuRg3Pk/Ws4Q== | base64 -d | sha256sum" should succeed, but failed with exit code 1 and error message "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.199.35.180 --kubeconfig=/workspace/.kube/config exec --namespace=multivolume-9260 security-context-a29f2403-94fc-4c7e-a695-eb1c3d386324 -- /bin/sh -c echo 9RHwF9jSP+LcR4ReCV2bmq1Q1uH+OEBDZ8ngfJ61iOtOK1a/BgqauYOgzuyHvk2GqWDjEHeFCUeuRg3Pk/Ws4Q== | base64 -d | sha256sum] []  <nil>  Error from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout\n [] <nil> 0xc001701fb0 exit status 1 <nil> <nil> true [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a20 0xc0036c3a38] [0x10f1330 0x10f1330] 0xc002831f80 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout\n\nerror:\nexit status 1"
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.199.35.180 --kubeconfig=/workspace/.kube/config exec --namespace=multivolume-9260 security-context-a29f2403-94fc-4c7e-a695-eb1c3d386324 -- /bin/sh -c echo 9RHwF9jSP+LcR4ReCV2bmq1Q1uH+OEBDZ8ngfJ61iOtOK1a/BgqauYOgzuyHvk2GqWDjEHeFCUeuRg3Pk/Ws4Q== | base64 -d | sha256sum] []  <nil>  Error from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout\n [] <nil> 0xc001701fb0 exit status 1 <nil> <nil> true [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a20 0xc0036c3a38] [0x10f1330 0x10f1330] 0xc002831f80 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.199.35.180 --kubeconfig=/workspace/.kube/config exec --namespace=multivolume-9260 security-context-a29f2403-94fc-4c7e-a695-eb1c3d386324 -- /bin/sh -c echo 9RHwF9jSP+LcR4ReCV2bmq1Q1uH+OEBDZ8ngfJ61iOtOK1a/BgqauYOgzuyHvk2GqWDjEHeFCUeuRg3Pk/Ws4Q== | base64 -d | sha256sum] []  <nil>  Error from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout
     [] <nil> 0xc001701fb0 exit status 1 <nil> <nil> true [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a10 0xc0036c3a28 0xc0036c3a40] [0xc0036c3a20 0xc0036c3a38] [0x10f1330 0x10f1330] 0xc002831f80 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: dial tcp 10.132.0.4:10250: i/o timeout
    
    error:
    exit status 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:75
				
stdout/stderr from junit_20.xml

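This first failure is not in the volume logic itself: the apiserver could not dial the kubelet at 10.132.0.4:10250 while proxying the kubectl exec, so the checksum command never ran in the pod. A minimal triage sketch, assuming the workspace kubeconfig used by the command above is still valid and that some host can reach the node's internal network (both are assumptions, not part of this job's artifacts):

# Is the node behind 10.132.0.4 still reported Ready?
kubectl --server=https://104.199.35.180 --kubeconfig=/workspace/.kube/config get nodes -o wide
# Inspect conditions and recent events on the suspect node (the node name is a placeholder).
kubectl --server=https://104.199.35.180 --kubeconfig=/workspace/.kube/config describe node <node-name>
# From a host on the node network, probe the kubelet port directly; a timeout here reproduces the error above.
nc -zv -w 5 10.132.0.4 10250
curl -sk https://10.132.0.4:10250/healthz   # any HTTP response, even 401, means the kubelet is reachable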


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow] 5m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\screating\smultiple\ssubpath\sfrom\ssame\svolumes\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:276
Oct  8 17:44:02.521: Unexpected error:
    <*errors.errorString | 0xc0015401a0>: {
        s: "expected pod \"pod-subpath-test-local-preprovisionedpv-sfzv\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-local-preprovisionedpv-sfzv\" to be \"success or failure\"",
    }
    expected pod "pod-subpath-test-local-preprovisionedpv-sfzv" success: Gave up after waiting 5m0s for pod "pod-subpath-test-local-preprovisionedpv-sfzv" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1219
				
stdout/stderr from junit_08.xml

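This failure, like the other subPath timeouts below, reports only that the test pod never reached a terminal phase within the 5m0s wait; the pod's events are what explain why. A hedged sketch for pulling that state from a live cluster, assuming the workspace kubeconfig is still valid (the namespace placeholder is hypothetical; the generated namespace is recorded in the linked junit output):

NS=<test-namespace>   # hypothetical placeholder for the per-test generated namespace
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" get pod pod-subpath-test-local-preprovisionedpv-sfzv -o wide
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" describe pod pod-subpath-test-local-preprovisionedpv-sfzv
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" get events --sort-by=.lastTimestamp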


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow] 5m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\screating\smultiple\ssubpath\sfrom\ssame\svolumes\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:276
Oct  8 17:43:43.466: Unexpected error:
    <*errors.errorString | 0xc001b07f90>: {
        s: "expected pod \"pod-subpath-test-local-preprovisionedpv-qtnc\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-local-preprovisionedpv-qtnc\" to be \"success or failure\"",
    }
    expected pod "pod-subpath-test-local-preprovisionedpv-qtnc" success: Gave up after waiting 5m0s for pod "pod-subpath-test-local-preprovisionedpv-qtnc" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1219
				
stdout/stderr from junit_11.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow] 5m57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\screating\smultiple\ssubpath\sfrom\ssame\svolumes\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:276
Oct  8 17:36:22.049: Unexpected error:
    <*errors.errorString | 0xc0027c8770>: {
        s: "expected pod \"pod-subpath-test-nfs-dynamicpv-2zp8\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-nfs-dynamicpv-2zp8\" to be \"success or failure\"",
    }
    expected pod "pod-subpath-test-nfs-dynamicpv-2zp8" success: Gave up after waiting 5m0s for pod "pod-subpath-test-nfs-dynamicpv-2zp8" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1219
				
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow] 10m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:404
Oct  8 17:36:15.097: while waiting for volume init pod to succeed
Unexpected error:
    <*errors.errorString | 0xc0036d0340>: {
        s: "Gave up after waiting 5m0s for pod \"volume-prep-provisioning-4905\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "volume-prep-provisioning-4905" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:910
				
stdout/stderr from junit_06.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node 5m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:124
Oct  8 17:41:45.543: Unexpected error:
    <*errors.errorString | 0xc0028ded50>: {
        s: "pod \"security-context-9e692ff8-e1d6-4beb-a5b3-ce0d17f93a6c\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-9e692ff8-e1d6-4beb-a5b3-ce0d17f93a6c" is not Running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:346
				
stdout/stderr from junit_14.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node 5m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\sconcurrently\saccess\sthe\ssingle\svolume\sfrom\spods\son\sdifferent\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:306
Oct  8 17:42:25.164: Unexpected error:
    <*errors.errorString | 0xc0015b49d0>: {
        s: "pod \"security-context-329811cd-d9fc-4d33-a360-545611c4d5c2\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-329811cd-d9fc-4d33-a360-545611c4d5c2" is not Running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:420
				
stdout/stderr from junit_05.xml

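Both multiVolume failures above show the security-context pod never reaching Running with a dynamically provisioned NFS volume, which typically points at provisioning, attach, or mount trouble rather than at the test body. A hedged sketch for checking the claim and event side, under the same assumptions as above (workspace kubeconfig still valid, namespace placeholder hypothetical):

NS=<test-namespace>   # hypothetical placeholder for the per-test generated namespace
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" get pvc
kubectl --kubeconfig=/workspace/.kube/config get pv
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" describe pod security-context-329811cd-d9fc-4d33-a360-545611c4d5c2
kubectl --kubeconfig=/workspace/.kube/config -n "$NS" get events --field-selector involvedObject.name=security-context-329811cd-d9fc-4d33-a360-545611c4d5c2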


Test 30m52s

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Slow\] --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

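Rather than re-running the entire [Slow] suite, the generated per-test commands above can be merged into a single focus regex; a sketch, which intentionally broadens the run to every test matching either pattern (escaping follows the generated commands above):

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=multiVolume\s\[Slow\]|subPath\sshould --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'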


274 Passed Tests

4816 Skipped Tests