Result: FAILURE
Tests: 79 failed / 643 succeeded
Started: 2019-03-20 07:16
Elapsed: 1h13m
Builder: gke-prow-containerd-pool-99179761-b4dm
Pod: e7dcb7ed-4adf-11e9-ab9f-0a580a6c0a8e
Resultstore: https://source.cloud.google.com/results/invocations/e6709230-b521-481f-931e-4a2cdd6ec833/targets/test
Infra-commit: 3931105de
Job-version: v1.15.0-alpha.0.1313+6f9bf5fe98bcc3
Revision: v1.15.0-alpha.0.1313+6f9bf5fe98bcc3

Test Failures


Test 1h2m

error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml
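The run drives the full e2e suite through hack/ginkgo-e2e.sh with a skip regex that excludes Slow, Serial, Disruptive, Flaky, Feature-gated, HPA, Dashboard, and NodePort-functioning specs; exit status 1 means at least one of the remaining specs failed. A minimal sketch of reproducing the same filtered run, assuming a built kubernetes tree and KUBECONFIG pointing at a live test cluster (the report dir is swapped for a local path):

./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 \
  '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort' \
  --report-dir=/tmp/_artifacts --disable-log-dump=true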


[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 7m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Probing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:59:05.412: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_07.xml
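Most of the failures in this run share the same final assertion, "All nodes should be ready after test", with ip-172-20-44-39.ca-central-1.compute.internal as the single not-ready node, so the individual specs are likely collateral damage of that node going NotReady mid-run. A first triage pass, assuming kubectl access to the test cluster and SSH access to the node (the journalctl call assumes a systemd-managed kubelet):

kubectl get nodes
# Show the NotReady node's conditions and recent events
kubectl describe node ip-172-20-44-39.ca-central-1.compute.internal
# On the node itself: is the kubelet alive, and what did it log around the failure window?
journalctl -u kubelet --since "2019-03-20 07:00" | tail -n 200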


[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] 3m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Variable\sExpansion\sshould\sallow\scomposing\senv\svars\sinto\snew\senv\svars\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:41:39.150: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_18.xml


[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] 3m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Variable\sExpansion\sshould\sallow\scomposing\senv\svars\sinto\snew\senv\svars\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:44:48.913: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_18.xml


[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] 10m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Variable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\sargs\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
wait for pod "var-expansion-c6834525-4ae2-11e9-bfba-a6f36cb5aba4" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00032a460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
				from junit_11.xml
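Unlike the node-readiness failures, this spec timed out waiting for pod var-expansion-c6834525-4ae2-11e9-bfba-a6f36cb5aba4 to be deleted, which is consistent with the pod sitting on the not-ready node: graceful deletion needs the kubelet to confirm, so a pod on an unreachable node can stay in Terminating indefinitely. A quick check while the run is live, assuming kubectl access (the e2e framework's per-test namespace is not shown here, so <test-namespace> is a placeholder):

kubectl get pods --all-namespaces -o wide | grep var-expansion-c6834525
# If it is stuck Terminating on ip-172-20-44-39, the events explain why deletion never completed
kubectl describe pod var-expansion-c6834525-4ae2-11e9-bfba-a6f36cb5aba4 -n <test-namespace>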


[k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] 3m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 08:02:48.315: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_06.xml


[k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] 3m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:59:38.585: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_06.xml


[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 18m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:312
Unexpected error:
    <*errors.errorString | 0xc0002772f0>: {
        s: "Only 34 pods started out of 40",
    }
    Only 34 pods started out of 40
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:325
				
				from junit_08.xml
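"Only 34 pods started out of 40" points the same way: pods assigned to the not-ready node never reached Running within the test's startup timeout. A sketch of narrowing that down, assuming kubectl access (the test's namespace name is not shown, so the queries are cluster-wide):

# Pods that never got past scheduling/startup
kubectl get pods --all-namespaces --field-selector=status.phase=Pending -o wide
# Everything scheduled onto the suspect node
kubectl get pods --all-namespaces -o wide --field-selector=spec.nodeName=ip-172-20-44-39.ca-central-1.compute.internal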


[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 20m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:312
Unexpected error:
    <*errors.errorString | 0xc0018282c0>: {
        s: "Only 31 pods started out of 40",
    }
    Only 31 pods started out of 40
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:325
				
				from junit_08.xml


[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 9m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
deploying extension apiserver in namespace aggregator-6893
Unexpected error:
    <*errors.errorString | 0xc002a36e80>: {
        s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-apiserver-deployment-65db6755fc\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-apiserver-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688664178, loc:(*time.Location)(0x89c5f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:308
				
				from junit_03.xml
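Here the extension apiserver never became available: sample-apiserver-deployment in namespace aggregator-6893 stayed at ReadyReplicas:0 with Available=False / MinimumReplicasUnavailable, so the aggregation test gave up. A hedged way to see why, while the namespace still exists (e2e namespaces are torn down after the spec, so this only works during a live run):

kubectl get deployment sample-apiserver-deployment -n aggregator-6893
kubectl get pods -n aggregator-6893 -o wide
# Pod events usually name the blocker: image pull, scheduling, or a kubelet that never reported the container Ready
kubectl describe pods -n aggregator-6893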


[sig-apps] CronJob should remove from active list jobs that have been deleted 4m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sCronJob\sshould\sremove\sfrom\sactive\slist\sjobs\sthat\shave\sbeen\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:51:47.253: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_16.xml


[sig-apps] CronJob should remove from active list jobs that have been deleted 4m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sCronJob\sshould\sremove\sfrom\sactive\slist\sjobs\sthat\shave\sbeen\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:55:55.955: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395
				
				from junit_16.xml


[sig-apps] CronJob should schedule multiple jobs concurrently 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sCronJob\sshould\sschedule\smultiple\sjobs\sconcurrently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Mar 20 07:57:06.900: All nodes should be ready after test, Not ready nodes: ", ip-172-20-44-39.ca-central-1.compute.internal"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:395