Result: FAILURE
Tests: 5 failed / 1053 succeeded
Started: 2019-08-18 06:02
Elapsed: 15h13m
Builder: gke-prow-ssd-pool-1a225945-66t6
pod: 9d6cf2fa-c17d-11e9-be5c-ee22131cc068
resultstore: https://source.cloud.google.com/results/invocations/b3f567a3-f8b5-4ffd-a86a-a2a79c4f46b3/targets/test
infra-commit: b111600a7
job-version: v1.15.3-beta.0.70+2d3c76f9091b6b
revision: v1.15.3-beta.0.70+2d3c76f9091b6b
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


Kubernetes e2e suite AfterSuite 0.00s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sAfterSuite$'
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:162
Aug 18 21:02:28.472: Couldn't delete ns: "services-4362": Operation cannot be fulfilled on namespaces "services-4362": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"services-4362\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc0025114a0), Code:409}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				from junit_04.xml
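The 409 Conflict means the services-4362 namespace was still being terminated when the suite tried to delete it during cleanup. As a diagnostic sketch only (the namespace name is specific to this run, and access to the test cluster is assumed), the namespace phase, its remaining finalizers, and any leftover objects can be checked with standard kubectl commands:

# Is the namespace stuck in Terminating, and which finalizers remain?
kubectl get namespace services-4362 -o jsonpath='{.status.phase}{"\n"}'
kubectl get namespace services-4362 -o jsonpath='{.spec.finalizers}{"\n"}'
# List objects the namespace controller is still deleting
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n services-4362 --ignore-not-found --show-kind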



Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config 47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\shandle\sin\-cluster\sconfig$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:621
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{../../../../kubernetes_skew/cluster/kubectl.sh [../../../../kubernetes_skew/cluster/kubectl.sh --server=https://35.233.178.167 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-1893 nginx -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] []  <nil> I0818 06:56:40.356514      94 merged_client_builder.go:164] Using in-cluster namespace\nI0818 06:56:40.356771      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.433049      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.507240      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.507628      94 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-1893/pods?limit=500\nI0818 06:56:40.507641      94 round_trippers.go:455] Request Headers:\nI0818 06:56:40.507648      94 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0818 06:56:40.507655      94 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/f142fb7\nI0818 06:56:40.507664      94 round_trippers.go:459]     Authorization: Bearer <masked>\nI0818 06:56:40.847641      94 round_trippers.go:474] Response Status: 401 Unauthorized in 339 milliseconds\nI0818 06:56:40.848517      94 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0818 06:56:40.848545      94 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n + /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n [] <nil> 0xc00184bc80 exit status 255 <nil> <nil> true [0xc001c5d620 0xc001c5d638 0xc001c5d650] [0xc001c5d620 0xc001c5d638 0xc001c5d650] [0xc001c5d630 0xc001c5d648] [0x9d21f0 0x9d21f0] 0xc001fbb6e0 <nil>}:\nCommand stdout:\nI0818 06:56:40.356514      94 merged_client_builder.go:164] Using in-cluster namespace\nI0818 06:56:40.356771      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.433049      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.507240      94 merged_client_builder.go:122] Using in-cluster configuration\nI0818 06:56:40.507628      94 round_trippers.go:448] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-1893/pods?limit=500\nI0818 06:56:40.507641      94 round_trippers.go:455] Request Headers:\nI0818 06:56:40.507648      94 round_trippers.go:459]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json\nI0818 06:56:40.507655      94 round_trippers.go:459]     User-Agent: kubectl/v1.16.0 (linux/amd64) kubernetes/f142fb7\nI0818 06:56:40.507664      94 round_trippers.go:459]     Authorization: Bearer <masked>\nI0818 06:56:40.847641      94 round_trippers.go:474] Response Status: 401 Unauthorized in 339 milliseconds\nI0818 06:56:40.848517      94 helpers.go:199] server response object: [{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}]\nF0818 06:56:40.848545      94 helpers.go:114] error: You must be logged in to the server (Unauthorized)\n\nstderr:\n+ /tmp/kubectl get pods '--token=invalid' '--v=7'\ncommand terminated with exit code 255\n\nerror:\nexit status 255",
        },
        Code: 255,
    }
to contain substring
    <string>: Authorization: Bearer invalid
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:721
				
from junit_24.xml
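The assertion looks for the literal header "Authorization: Bearer invalid" in the captured --v=7 output, but the skewed kubectl/v1.16.0 client inside the pod logs it as "Authorization: Bearer <masked>", so the substring check fails. A minimal sketch of repeating the same check by hand (pod name, namespace, and the /tmp/kubectl path are taken from the log above; cluster access is assumed):

kubectl --namespace=kubectl-1893 exec nginx -- \
  /bin/sh -c '/tmp/kubectl get pods --token=invalid --v=7 2>&1' \
  | grep 'Authorization: Bearer'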



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly] 6m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\sfile\sis\soutside\sthe\svolume\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
Unexpected error:
    <*errors.errorString | 0xc001195620>: {
        s: "PersistentVolumeClaims [pvc-62z5q] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-62z5q] not all in phase Bound within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:346
				
from junit_06.xml
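Here the subPath test never ran its actual check: the dynamically provisioned claim pvc-62z5q stayed unbound for the full 5m0s wait. A hedged inspection sketch (the test namespace is not shown in this excerpt, so <test-namespace> below is a placeholder):

kubectl get pvc pvc-62z5q -n <test-namespace> -o wide
# Events usually explain why provisioning or binding stalled
kubectl describe pvc pvc-62z5q -n <test-namespace>
kubectl get events -n <test-namespace> --sort-by=.lastTimestamp | tail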



Test 14h31m

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true (interrupted): exit status 1
				from junit_runner.xml



Timeout 15h0m

kubetest --timeout triggered
				from junit_runner.xml
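These last two runner entries are linked: the e2e suite had been running for 14h31m when the 15h0m kubetest --timeout fired, so ./hack/ginkgo-e2e.sh was interrupted rather than failing on its own. A sketch of rerunning the same step with a larger budget, assuming kubetest's --timeout accepts a Go-style duration as the 15h0m above suggests (the 16h value is only an example):

kubetest --test \
  --test_args='--ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8' \
  --timeout=16h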



1053 Passed Tests

8222 Skipped Tests