PR: nckturner: Fix ECR provider startup latency
Result: FAILURE
Tests: 42 failed / 623 succeeded
Started: 2021-02-22 21:49
Elapsed: 41m23s
Revision: 4ddbae65f7230d2dcb8b4a885d7f161b75385feb
Refs: 93260

Test Failures


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:17:09.568: Timed out after 300.000s.
Expected
    <*errors.errorString | 0xc001e67e70>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-22 22:12:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-22 22:12:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec6462680-6ae0-43c0-a060-bd85b36e776a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-22 22:12:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec6462680-6ae0-43c0-a060-bd85b36e776a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-22 22:12:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.18.0.2 PodIP: PodIPs:[] StartTime:2021-02-22 22:12:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsec6462680-6ae0-43c0-a060-bd85b36e776a State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/busybox:1.29 ImageID: ContainerID: Started:0xc00329697a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:124
				
stdout/stderr from junit_10.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] 5m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:11:00.016: starting pod liveness-f8a3c457-9f68-41ee-955e-dd918cc43603 in namespace container-probe-768
Unexpected error:
    <*errors.errorString | 0xc0002be240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/container_probe.go:592
				
stdout/stderr from junit_05.xml


Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] 5m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:10:55.553: starting pod liveness-e31bbb52-54e2-4f6b-acf1-39dc297a6222 in namespace container-probe-4402
Unexpected error:
    <*errors.errorString | 0xc000238230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/container_probe.go:592
				
stdout/stderr from junit_15.xml


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\sexplicit\snon\-root\suser\sID\s\[LinuxOnly\]$'
test/e2e/common/security_context.go:124
Feb 22 22:16:01.558: wait for pod "explicit-nonroot-uid" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc002d46cb0>: {
        s: "Gave up after waiting 5m0s for pod \"explicit-nonroot-uid\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "explicit-nonroot-uid" to be "Succeeded or Failed"
test/e2e/framework/pods.go:212
				
stdout/stderr from junit_16.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 5m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
test/e2e/common/security_context.go:330
Feb 22 22:14:20.137: wait for pod "alpine-nnp-nil-52291f11-a5a8-478f-8528-cf1cfddc4836" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc002aa9170>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-nil-52291f11-a5a8-478f-8528-cf1cfddc4836\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-nil-52291f11-a5a8-478f-8528-cf1cfddc4836" to be "Succeeded or Failed"
test/e2e/framework/pods.go:212
				
stdout/stderr from junit_23.xml


Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] 5m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer'\''s\scommand\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:10:51.951: Unexpected error:
    <*errors.errorString | 0xc000574dd0>: {
        s: "expected pod \"var-expansion-fc56c104-d778-467f-8c93-937a6c4cddec\" success: Gave up after waiting 5m0s for pod \"var-expansion-fc56c104-d778-467f-8c93-937a6c4cddec\" to be \"Succeeded or Failed\"",
    }
    expected pod "var-expansion-fc56c104-d778-467f-8c93-937a6c4cddec" success: Gave up after waiting 5m0s for pod "var-expansion-fc56c104-d778-467f-8c93-937a6c4cddec" to be "Succeeded or Failed"
occurred
test/e2e/framework/util.go:742
				
stdout/stderr from junit_13.xml


Kubernetes e2e suite [k8s.io] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] 5m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:10:49.150: Unexpected error:
    <*errors.errorString | 0xc0002b0230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/network/utils.go:829
				
stdout/stderr from junit_19.xml


Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 13m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
test/e2e/node/kubelet.go:341
Feb 22 22:19:28.148: Unexpected error:
    <*errors.errorString | 0xc0020f3d70>: {
        s: "Only 17 pods started out of 20",
    }
    Only 17 pods started out of 20
occurred
test/e2e/node/kubelet.go:354
				
stdout/stderr from junit_21.xml


Kubernetes e2e suite [k8s.io] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sConfigMap\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:15:54.877: Unexpected error:
    <*errors.errorString | 0xc0002b0230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/pods.go:103
				
stdout/stderr from junit_01.xml


Kubernetes e2e suite [k8s.io] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sProjected\sconfigMap\soptional\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:10:50.424: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/pods.go:103
				
stdout/stderr from junit_16.xml


Kubernetes e2e suite [k8s.io] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sProjected\sconfigMap\sshould\sbe\sconsumable\sfrom\spods\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:11:08.820: Unexpected error:
    <*errors.errorString | 0xc002e364e0>: {
        s: "expected pod \"pod-projected-configmaps-7548d668-b25f-4587-ae6f-038ab4adebfd\" success: Gave up after waiting 5m0s for pod \"pod-projected-configmaps-7548d668-b25f-4587-ae6f-038ab4adebfd\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-projected-configmaps-7548d668-b25f-4587-ae6f-038ab4adebfd" success: Gave up after waiting 5m0s for pod "pod-projected-configmaps-7548d668-b25f-4587-ae6f-038ab4adebfd" to be "Succeeded or Failed"
occurred
test/e2e/framework/util.go:742
				
stdout/stderr from junit_11.xml


Kubernetes e2e suite [k8s.io] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] 5m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\scontainer'\''s\smemory\slimit\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 22 22:11:21.705: Unexpected error:
    <*errors.errorString | 0xc002ead520>: {
        s: "expected pod \"downwardapi-volume-d1bf95d0-4c32-4b8d-9d1a-ab7343343d80\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-d1bf95d0-4c32-4b8d-9d1a-ab7343343d80\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-d1bf95d0-4c32-4b8d-9d1a-ab7343343d80" success: Gave up after waiting 5m0s for pod "downwardapi-volume-d1bf95d0-4c32-4b8d-9d1a-ab7343343d80" to be "Succeeded or Failed"
occurred
test/e2e/framework/util.go:742
				
stdout/stderr from junit_08.xml


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] 5m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\sbe\sable\sto\sdeny\sattaching\spod\s\[Conformance\]$'
test/e2e/apimachinery/webhook.go:86
Feb 22 22:15:47.379: waiting for the deployment status valid%!(EXTRA string=k8s.gcr.io/e2e-test-images/agnhost:2.28, string=sample-webhook-deployment, string=webhook-2468)
Unexpected error:
    <*errors.errorString | 0xc001e32590>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-8977db\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749628645, loc:(*time.Location)(0x78e9d80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
test/e2e/apimachinery/webhook.go:845