Result: FAILURE
Tests: 3 failed / 1055 succeeded
Started: 2019-08-19 13:25
Elapsed: 1h38m
Builder: gke-prow-ssd-pool-1a225945-m2ml
pod: a9a72f37-c284-11e9-be5c-ee22131cc068
resultstore: https://source.cloud.google.com/results/invocations/97b467dc-3341-453d-9542-184673a0228b/targets/test
infra-commit: 6763b35b4
job-version: v1.15.3-beta.0.70+2d3c76f9091b6b
revision: v1.15.3-beta.0.70+2d3c76f9091b6b
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config 1m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:621
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://35.247.35.233 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2030 nginx -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] []  <nil> I0819 14:16:33.631752      95 merged_client_builder.go:164] Using in-cluster namespace\nI0819 14:16:33.632023      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.641698      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.659630      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.660060      95 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-2030/pods?limit=500\nI0819 14:16:33.660074      95 round_trippers.go:455] Request Headers:\nI0819 14:16:33.660082      95 round_trippers.go:459]     Authorization: Bearer <masked>\nI0819 14:16:33.660089      95 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0819 14:16:33.660095      95 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/a5d968b\nI0819 14:16:33.739652      95 round_trippers.go:474] Response Status: 401 Unauthorized in 79 milliseconds\nI0819 14:16:33.740168      95 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0819 14:16:33.740200      95 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n + /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n [] <nil> 0xc00322e690 exit status 255 <nil> <nil> true [0xc001b24550 0xc001b24570 0xc001b245a0] [0xc001b24550 0xc001b24570 0xc001b245a0] [0xc001b24560 0xc001b24590] [0x9d21f0 0x9d21f0] 0xc003135620 <nil>}:\nCommand stdout:\nI0819 14:16:33.631752      95 merged_client_builder.go:164] Using in-cluster namespace\nI0819 14:16:33.632023      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.641698      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.659630      95 merged_client_builder.go:122] Using in-cluster configuration\nI0819 14:16:33.660060      95 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-2030/pods?limit=500\nI0819 14:16:33.660074      95 round_trippers.go:455] Request Headers:\nI0819 14:16:33.660082      95 round_trippers.go:459]     Authorization: Bearer <masked>\nI0819 14:16:33.660089      95 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0819 14:16:33.660095      95 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/a5d968b\nI0819 14:16:33.739652      95 round_trippers.go:474] Response Status: 401 Unauthorized in 79 milliseconds\nI0819 14:16:33.740168      95 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0819 14:16:33.740200      95 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n\nstderr:\n+ /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Authorization: Bearer invalid
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:721
				
stdout/stderr from junit_09.xml

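The expected substring never appears in the output above because the skewed client (kubectl/v1.16.0, per the User-Agent line) masks the bearer token in its --v=7 request log, printing "Authorization: Bearer <masked>" instead of echoing the literal "invalid" token. A minimal sketch of the failing comparison, in plain Go rather than the actual e2e assertion:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// What the skewed kubectl (v1.16.0) actually logged at --v=7: the bearer
	// token is masked in the request-header dump.
	observed := "Authorization: Bearer <masked>"

	// What the assertion at test/e2e/kubectl/kubectl.go:721 looks for: the
	// literal invalid token echoed back in the Authorization header.
	expected := "Authorization: Bearer invalid"

	// With the token masked, the substring check can never succeed.
	fmt.Println(strings.Contains(observed, expected)) // prints: false
}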


Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service 1m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\shave\ssession\saffinity\swork\sfor\sNodePort\sservice$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1686
Aug 19 14:16:24.709: Connection to 10.138.0.5:30806 timed out or not enough responses.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1488
				
stdout/stderr from junit_23.xml

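The affinity check behind service_util.go:1488 repeatedly connects to the node IP and NodePort (10.138.0.5:30806 here) and fails when too few requests get a response, or when responses arrive from more than one backend while session affinity is set; the message above is the timeout path. A minimal sketch of such a probe, assuming a hypothetical echo backend that replies with its pod name (illustrative Go, not the e2e framework code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeAffinity is a hypothetical stand-in for the e2e affinity check: issue
// several requests against the NodePort and tally which backend answered.
// Too few answers, or answers from more than one backend, would fail the test.
func probeAffinity(url string, attempts int) (map[string]int, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	backends := map[string]int{}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // a timeout counts as a missing response
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		backends[string(body)]++ // assumes the backend echoes its pod name
	}
	if len(backends) == 0 {
		return nil, fmt.Errorf("connection to %s timed out or not enough responses", url)
	}
	return backends, nil
}

func main() {
	// Node IP and NodePort taken from the failure message above.
	if backends, err := probeAffinity("http://10.138.0.5:30806/", 10); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println(backends)
	}
}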


Test 57m38s

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

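The exit status 1 from hack/ginkgo-e2e.sh reflects the two failed specs above. The --ginkgo.skip pattern keeps Serial, Disruptive, Flaky, and Feature-tagged specs out of this parallel run; a small sketch of how that pattern classifies spec names (hypothetical names, standard library regexp only):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The --ginkgo.skip pattern passed to hack/ginkgo-e2e.sh above.
	skip := regexp.MustCompile(`\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)

	// Hypothetical spec names, only to show which ones the pattern excludes.
	specs := []string{
		"[sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config",
		"[sig-scheduling] SchedulerPredicates [Serial] validates resource limits",
		"[sig-network] Services should work after restarting kube-proxy [Disruptive]",
	}
	for _, s := range specs {
		fmt.Printf("skipped=%v  %s\n", skip.MatchString(s), s)
	}
}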


Passed tests: 1055
Skipped tests: 8222