PR tedyu: Don't try to create VolumeSpec immediately after underlying PVC is being deleted
Result: FAILURE
Tests: 3 failed / 757 succeeded
Started: 2020-01-14 17:23
Elapsed: 55m6s
Builder: gke-prow-default-pool-cf4891d4-sql7
Refs: master:c9003a26, 86670:30a96d8c
pod: 6b77005b-36f2-11ea-a433-1606bbf4c6af
infra-commit: 2bc048569
job-version: v1.18.0-alpha.1.679+f592fe0b998a77
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
repo: k8s.io/kubernetes
repo-commit: f592fe0b998a776773fcec3684dd88ba5f271c9d
repos: {u'k8s.io/kubernetes': u'master:c9003a268dff6700506929b247847341ea4d5b33,86670:30a96d8cf6aec7a2ca117d8037bb0e65aaec5750', u'k8s.io/release': u'master'}
revision: v1.18.0-alpha.1.679+f592fe0b998a77

Test Failures


Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] (1m29s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sRS\screated\sby\sdeployment\swhen\snot\sorphaning\s\[Conformance\]$'
test/e2e/framework/framework.go:685
Jan 14 17:46:30.614: Failed to wait for all rs to be garbage collected: [timed out waiting for the condition, remaining rs are: &v1.ReplicaSetList{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"/apis/apps/v1/namespaces/gc-4317/replicasets", ResourceVersion:"14798", Continue:"", RemainingItemCount:(*int64)(nil)}, Items:[]v1.ReplicaSet(nil)}]
test/e2e/apimachinery/garbage_collector.go:533
				
stdout/stderr from junit_30.xml

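The `--ginkgo.focus` value in the repro command above is just the test name with regex metacharacters escaped and spaces replaced by `\s`. A minimal shell sketch of that escaping (illustrative only; the exact transformation the dashboard applies may differ):

```shell
# Hedged sketch: build a --ginkgo.focus regex from a plain test name.
# Escapes common ERE metacharacters, then turns spaces into \s.
name='Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]'
focus=$(printf '%s' "$name" | sed -e 's/[][\\.^$*+?(){}|-]/\\&/g' -e 's/ /\\s/g')
printf '%s\n' "$focus"
# Append $ to anchor the match, then pass it as --ginkgo.focus="$focus\$".
```
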


Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] (1m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\spodname\sas\snon\-root\swith\sfsgroup\s\[LinuxOnly\]\s\[NodeFeature\:FSGroup\]$'
test/e2e/common/projected_downwardapi.go:90
Jan 14 17:58:15.486: Unexpected error:
    <*errors.errorString | 0xc002227700>: {
        s: "expected pod \"metadata-volume-d770056d-0b11-4ab9-a0bc-3f82f8215a98\" success: pod \"metadata-volume-d770056d-0b11-4ab9-a0bc-3f82f8215a98\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.1.92 PodIPs:[{IP:10.64.1.92}] StartTime:2020-01-14 17:57:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-14 17:57:45 +0000 UTC,FinishedAt:2020-01-14 17:57:45 +0000 UTC,ContainerID:docker://b491784802897ebf3b867491e684d74c17214b07b5492c81941c1d74fa329db0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://b491784802897ebf3b867491e684d74c17214b07b5492c81941c1d74fa329db0 Started:0xc001ca5ee9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    expected pod "metadata-volume-d770056d-0b11-4ab9-a0bc-3f82f8215a98" success: pod "metadata-volume-d770056d-0b11-4ab9-a0bc-3f82f8215a98" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-14 17:57:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.1.92 PodIPs:[{IP:10.64.1.92}] StartTime:2020-01-14 17:57:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:OCI runtime start failed: container process is already dead: unknown,StartedAt:2020-01-14 17:57:45 +0000 UTC,FinishedAt:2020-01-14 17:57:45 +0000 UTC,ContainerID:docker://b491784802897ebf3b867491e684d74c17214b07b5492c81941c1d74fa329db0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://b491784802897ebf3b867491e684d74c17214b07b5492c81941c1d74fa329db0 Started:0xc001ca5ee9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred
test/e2e/framework/util.go:829
				
stdout/stderr from junit_17.xml



Test (32m57s)

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

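The `--ginkgo.skip` regex in the command above excludes any test whose name carries one of the bracketed labels. A hedged illustration of how that filter behaves, using `grep -E` against example test names (the names here are made up for illustration):

```shell
# How the skip regex classifies test names (illustrative sketch).
skip='\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'

matches() { printf '%s' "$1" | grep -Eq "$skip"; }

# A [Serial] test carries an excluded label, so the filter skips it.
matches 'CSI mock volume [Serial] [Disruptive]' && echo 'skipped'

# A Conformance test carries none of the excluded labels, so it runs.
# Note [NodeFeature:...] does not match \[Feature:...\] and is NOT skipped.
matches '[sig-api-machinery] Garbage collector [Conformance]' || echo 'run'
```
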


757 Passed Tests

4099 Skipped Tests