Result: FAILURE
Tests: 5 failed / 170 succeeded
Started: 2020-01-13 09:06
Elapsed: 7h8m
Builder: gke-prow-default-pool-cf4891d4-k31k
pod: d73c84de-35e3-11ea-9fef-d200904e1a96
resultstore: https://source.cloud.google.com/results/invocations/6268f0a9-7b9e-4a43-a43b-7394c0d5d579/targets/test
infra-commit: d6212fa62
job-version: v1.15.8-beta.1.30+14ede42c4fe699
master_os_image: cos-73-11647-163-0
node_os_image: ubuntu-gke-1804-d1809-0-v20200110
revision: v1.15.8-beta.1.30+14ede42c4fe699

Test Failures


Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed 7m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sDNS\shorizontal\sautoscaling\s\[Serial\]\s\[Slow\]\skube\-dns\-autoscaler\sshould\sscale\skube\-dns\spods\swhen\scluster\ssize\schanged$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:103
Unexpected error:
    <*errors.errorString | 0xc0014b2580>: {
        s: "err waiting for DNS replicas to satisfy 3, got 4: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 3, got 4: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:146
				
stdout/stderr from junit_01.xml

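For reference on where the expected count comes from: kube-dns-autoscaler runs the cluster-proportional-autoscaler, whose "linear" mode takes the larger of a cores-driven and a nodes-driven replica estimate and clamps it to configured bounds. A minimal Go sketch of that formula, under that assumption (parameter values below are illustrative, not this cluster's actual ConfigMap):

package main

import (
	"fmt"
	"math"
)

// linearReplicas sketches the cluster-proportional-autoscaler "linear" mode
// used by kube-dns-autoscaler:
//   replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica))
// clamped to [min, max]; max <= 0 means "no upper bound" in this sketch.
func linearReplicas(cores, nodes int, coresPerReplica, nodesPerReplica float64, min, max int) int {
	byCores := int(math.Ceil(float64(cores) / coresPerReplica))
	byNodes := int(math.Ceil(float64(nodes) / nodesPerReplica))
	replicas := byCores
	if byNodes > replicas {
		replicas = byNodes
	}
	if replicas < min {
		replicas = min
	}
	if max > 0 && replicas > max {
		replicas = max
	}
	return replicas
}

func main() {
	// Illustrative only: with nodesPerReplica=1, a cluster still reporting
	// 4 schedulable nodes yields 4 replicas, which would produce exactly
	// this "satisfy 3, got 4" failure if the test's resize back to 3 nodes
	// had not yet propagated when the wait timed out.
	fmt.Println(linearReplicas(16, 4, 256, 1, 1, 0)) // prints 4
}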


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)] volumes should allow exec of files on the volume 5m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:173
Unexpected error:
    <*errors.errorString | 0xc00305bfb0>: {
        s: "expected pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-jmfq\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-jmfq\" to be \"success or failure\"",
    }
    expected pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-jmfq" success: Gave up after waiting 5m0s for pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-jmfq" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342
				
stdout/stderr from junit_01.xml

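This failure and the "should be mountable" one below share the same symptom: the test pod never reached a terminal phase within the framework's 5-minute wait. A hypothetical client-go sketch of that kind of wait (waitForPodCompletion is made up here for illustration; it is not the e2e framework's actual helper):

package e2ewait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls a pod until it is Succeeded or Failed, or the
// timeout expires -- the shape of the `to be "success or failure"` wait in
// the error above. Illustrative sketch only, using a modern client-go Get
// signature; not the framework code this job actually ran.
func waitForPodCompletion(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return true, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}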


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)] volumes should be mountable 5m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:142
Unexpected error:
    <*errors.errorString | 0xc00264fe10>: {
        s: "Gave up after waiting 5m0s for pod \"gcepd-injector-l2hv\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "gcepd-injector-l2hv" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:570
				
stdout/stderr from junit_01.xml



Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node is deleted 11m27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPod\sDisks\sdetach\sin\sa\sdisrupted\senvironment\s\[Slow\]\s\[Disruptive\]\swhen\snode\sis\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:336
Requires current node count (4) to return to original node count (3)
Expected
    <int>: 4
to equal
    <int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:401
				
stdout/stderr from junit_01.xml

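The assertion here is that after the disruptive test deletes a node, the node count should settle back at the original 3; this run still saw 4 when the check fired (consistent with a leftover node from an earlier resize). A hypothetical sketch of such a poll (waitForNodeCount is invented for illustration; the real test also filters on node readiness and schedulability):

package e2ewait

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeCount polls until the cluster reports exactly `want` nodes.
// Illustrative only: counts all registered nodes, ready or not.
func waitForNodeCount(ctx context.Context, c kubernetes.Interface, want int, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			// Tolerate transient API errors while the deleted node is replaced.
			return false, nil
		}
		return len(nodes.Items) == want, nil
	})
}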


Test 6h52m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml



170 Passed Tests

4250 Skipped Tests