Result: FAILURE
Tests: 3 failed / 1055 succeeded
Started: 2019-08-19 07:22
Elapsed: 1h35m
Revision: v1.15.3-beta.0.70+2d3c76f9091b6b
Builder: gke-prow-ssd-pool-1a225945-g951
pod: f3cba1e5-c251-11e9-be5c-ee22131cc068
resultstore: https://source.cloud.google.com/results/invocations/0b31bd8b-a6ce-4faf-b623-b94b81da7c17/targets/test
infra-commit: 91da08744
job-version: v1.15.3-beta.0.70+2d3c76f9091b6b
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config (36s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:621
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://104.196.229.193 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-363 nginx -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] []  <nil> I0819 08:28:23.910403      95 merged_client_builder.go:164] Using in-cluster namespace\nI0819 08:28:23.913926      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.927572      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.946648      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.947490      95 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-363/pods?limit=500\nI0819 08:28:23.947520      95 round_trippers.go:455] Request Headers:\nI0819 08:28:23.947546      95 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0819 08:28:23.947566      95 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/a5d968b\nI0819 08:28:23.947575      95 round_trippers.go:459]     Authorization: Bearer <masked>\nI0819 08:28:23.955094      95 round_trippers.go:474] Response Status: 401 Unauthorized in 7 milliseconds\nI0819 08:28:23.955540      95 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0819 08:28:23.955594      95 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n + /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n [] <nil> 0xc003030f90 exit status 255 <nil> <nil> true [0xc0034d2a10 0xc0034d2a28 0xc0034d2a40] [0xc0034d2a10 0xc0034d2a28 0xc0034d2a40] [0xc0034d2a20 0xc0034d2a38] [0x9d21f0 0x9d21f0] 0xc00259a4e0 <nil>}:\nCommand 
stdout:\nI0819 08:28:23.910403      95 merged_client_builder.go:164] Using in-cluster namespace\nI0819 08:28:23.913926      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.927572      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.946648      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 08:28:23.947490      95 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-363/pods?limit=500\nI0819 08:28:23.947520      95 round_trippers.go:455] Request Headers:\nI0819 08:28:23.947546      95 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0819 08:28:23.947566      95 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/a5d968b\nI0819 08:28:23.947575      95 round_trippers.go:459]     Authorization: Bearer <masked>\nI0819 08:28:23.955094      95 round_trippers.go:474] Response Status: 401 Unauthorized in 7 milliseconds\nI0819 08:28:23.955540      95 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0819 08:28:23.955594      95 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n\nstderr:\n+ /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Authorization: Bearer invalid
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:721
				
stdout/stderr from junit_05.xml

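The substring assertion above fails because of version skew: the test (from the v1.15 skew suite) expects kubectl's `--v=7` request logging to echo the literal token passed via `--token=invalid`, but the kubectl v1.16.0 client under test redacts Authorization headers, printing `Bearer <masked>` instead. A minimal sketch of the mismatch, with both strings taken verbatim from the log above (the variable names are illustrative, not the e2e framework's):

```python
# kubectl v1.16 masks bearer tokens in its --v=7 request-header logging,
# so the raw token the v1.15-era test greps for never appears.
logged = "Authorization: Bearer <masked>"     # what kubectl v1.16 printed
expected = "Authorization: Bearer invalid"    # substring the test asserts on
assert expected not in logged                 # hence the "to contain substring" failure
```

This is why the Expect on the `exec.CodeExitError` fails even though the command's 401 Unauthorized behavior is otherwise what the test provoked.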


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node (5m44s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sdifferent\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:147
Unexpected error:
    <*errors.errorString | 0xc0021a5380>: {
        s: "PersistentVolumeClaims [pvc-s8zwh] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-s8zwh] not all in phase Bound within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:346
				
stdout/stderr from junit_02.xml

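The second failure is a provisioning timeout rather than an assertion mismatch: the claim `pvc-s8zwh` never reached phase `Bound` within the 5-minute wait, so the suite aborted before exercising the volumes. The framework's wait is essentially a poll-until-phase loop with a deadline; a generic sketch of that pattern (function name, intervals, and phases here are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=300, interval=5):
    """Poll get_phase() until it returns `want` or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_phase() == want:
            return True
        time.sleep(interval)
    return False  # reported as e.g. "not all in phase Bound within 5m0s"

# A claim that binds on the third poll succeeds; one stuck Pending times out.
phases = iter(["Pending", "Pending", "Bound"])
assert wait_for_phase(lambda: next(phases), timeout=1, interval=0.01)
assert not wait_for_phase(lambda: "Pending", timeout=0.1, interval=0.01)
```

When this fires in a real run, the usual next step is inspecting the claim's events (for an NFS dynamic-provisioning test, typically a provisioner pod or external-provisioner problem), which the linked junit_02.xml stdout/stderr would show.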


Test (53m26s)

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



Passed tests: 1055
Skipped tests: 8222