Result: FAILURE
Tests: 4 failed / 695 succeeded
Started: 2020-01-17 07:06
Elapsed: 31m28s
resultstore: https://source.cloud.google.com/results/invocations/9b3aea76-86f4-450a-9cad-0ced3dbe7a18/targets/test
job-version: v1.18.0-alpha.1.848+916edd922e528f
revision: v1.18.0-alpha.1.848+916edd922e528f

Test Failures


Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sMulti\-AZ\sClusters\sshould\sspread\sthe\spods\sof\sa\sreplication\scontroller\sacross\szones$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:59
Jan 17 07:29:51.014: Pods were not evenly spread across zones.  0 in one zone and 7 in another zone
Expected
    <int>: 0
to be within 1 of ~
    <int>: 7
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:173
				
stdout/stderr from junit_16.xml

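The "to be within 1 of ~" phrasing above is Gomega's BeNumerically matcher with the "~" comparator: every zone's pod count is expected to land within 1 of the busiest zone's count, and here one zone had 0 pods against another's 7. A minimal Go sketch of an assertion of that shape (the helper name and the podsPerZone map are illustrative, not the actual code at ubernetes_lite.go:173):

package e2esketch

import "github.com/onsi/gomega"

// CheckEvenZoneSpread is an illustrative helper (not the real e2e code):
// every zone's pod count must be within 1 of the busiest zone's count,
// which is the shape of the failure reported above (0 vs 7).
// Assumes gomega's fail handler is registered by the surrounding suite.
func CheckEvenZoneSpread(podsPerZone map[string]int) {
	maxPods := 0
	for _, n := range podsPerZone {
		if n > maxPods {
			maxPods = n
		}
	}
	for zone, n := range podsPerZone {
		gomega.Expect(n).To(gomega.BeNumerically("~", maxPods, 1),
			"Pods were not evenly spread across zones (zone %s)", zone)
	}
}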


Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones 4.55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sMulti\-AZ\sClusters\sshould\sspread\sthe\spods\sof\sa\sservice\sacross\szones$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:55
Jan 17 07:21:06.973: Pods were not evenly spread across zones.  0 in one zone and 6 in another zone
Expected
    <int>: 0
to be within 1 of ~
    <int>: 6
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:173
				
stdout/stderr from junit_21.xml

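Both Multi-AZ failures reduce to the same bookkeeping: map each test pod to its node, read the node's zone label, and count per zone. A hedged client-go sketch of that bookkeeping follows; the zone label key (failure-domain.beta.kubernetes.io/zone was the usual key on clusters of this vintage), the "default" namespace, and the KUBECONFIG handling are assumptions for illustration, not the e2e framework's implementation.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from $KUBECONFIG (illustrative; the e2e run used the
	// temporary kops kubeconfig shown in the Test entry further down).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Map node name -> zone label (label key is an assumption for this era).
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	nodeZone := map[string]string{}
	for _, n := range nodes.Items {
		nodeZone[n.Name] = n.Labels["failure-domain.beta.kubernetes.io/zone"]
	}

	// Count pods per zone in an illustrative namespace.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	perZone := map[string]int{}
	for _, p := range pods.Items {
		if p.Spec.NodeName != "" {
			perZone[nodeZone[p.Spec.NodeName]]++
		}
	}
	fmt.Println(perZone)
}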


Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 1m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\snode\sallocatable\s\(memory\)\sas\sdefault\smemory\slimit\sif\sthe\slimit\sis\snot\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 17 07:24:47.786: Unexpected error:
    <*errors.errorString | 0xc002347870>: {
        s: "expected pod \"downwardapi-volume-354051de-4a54-4422-a641-aba8cdb87989\" success: pod \"downwardapi-volume-354051de-4a54-4422-a641-aba8cdb87989\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.39.199 PodIP:100.96.3.2 PodIPs:[{IP:100.96.3.2}] StartTime:2020-01-17 07:23:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-17 07:24:17 +0000 UTC,FinishedAt:2020-01-17 07:24:17 +0000 UTC,ContainerID:docker://3920dad87888466f61aaf25fba130e815a66f725819bbeec566c6c5d8e7e2c2b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://3920dad87888466f61aaf25fba130e815a66f725819bbeec566c6c5d8e7e2c2b Started:0xc002798b9a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    expected pod "downwardapi-volume-354051de-4a54-4422-a641-aba8cdb87989" success: pod "downwardapi-volume-354051de-4a54-4422-a641-aba8cdb87989" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-17 07:23:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.39.199 PodIP:100.96.3.2 PodIPs:[{IP:100.96.3.2}] StartTime:2020-01-17 07:23:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-17 07:24:17 +0000 UTC,FinishedAt:2020-01-17 07:24:17 +0000 UTC,ContainerID:docker://3920dad87888466f61aaf25fba130e815a66f725819bbeec566c6c5d8e7e2c2b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://3920dad87888466f61aaf25fba130e815a66f725819bbeec566c6c5d8e7e2c2b Started:0xc002798b9a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:829
				
stdout/stderr from junit_04.xml

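This Downward API failure is not an assertion mismatch: the container terminated with ContainerCannotRun ("OCI runtime start failed: container process is already dead"), so the pod never reported success. For reference, a hedged sketch of the kind of pod such a test creates: a downward API volume file backed by resourceFieldRef limits.memory, with no memory limit on the container so the published value falls back to node allocatable. The pod name and image are taken from the failure text above; the mount path, file name, and container args are illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DownwardAPIMemoryPod sketches the shape of pod this test exercises.
func DownwardAPIMemoryPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-354051de-4a54-4422-a641-aba8cdb87989"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				// No resources.limits.memory is set, so the downward API
				// value falls back to node allocatable memory.
				Args: []string{"--file_content=/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}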


Test 20m10s

error during platforms/linux/amd64/ginkgo --nodes=30 platforms/linux/amd64/e2e.test -- --kubeconfig=/tmp/kops497304722/kubeconfig --ginkgo.flakeAttempts=1 --provider=aws --gce-zone=us-west-2a --gce-region=us-west-2 --gce-multizone=false --host=https://api-e2e-kops-aws-ha-uswes-l3e4kr-1495990122.us-west-2.elb.amazonaws.com --cluster-tag=e2e-kops-aws-ha-uswest2.k8s.local --repo-root=. --num-nodes=0 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort --report-dir=/logs/artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

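This final entry is the harness itself exiting non-zero after the failures above. The --ginkgo.skip pattern in that command excludes Slow, Serial, Disruptive, Flaky, and feature-gated specs from the parallel run; a small Go check of which spec names it matches (the pattern is copied verbatim from the command, the second sample name is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The --ginkgo.skip pattern from the failing command above.
	skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort`)

	// The first name is one of the failed specs above (it ran, so it is not
	// skipped); the second is an illustrative Serial spec name.
	specs := []string{
		"[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones",
		"[sig-apps] Daemon set [Serial] example spec",
	}
	for _, s := range specs {
		fmt.Printf("skip %q: %v\n", s, skip.MatchString(s))
	}
}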


695 Passed Tests

4158 Skipped Tests