PR p0lyn0mial: KCM: specifies the upper-bound timeout limit for outgoing requests
Result FAILURE
Tests 20 failed / 645 succeeded
Started 2021-02-23 12:13
Elapsed 38m34s
Revision 662cc70c70a0f2b269188b9b2192eeee0e1a2ab4
Refs 99358
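
To inspect the code under test locally, the PR head can be fetched via its GitHub pull-request ref; a minimal sketch, assuming a kubernetes/kubernetes clone whose "origin" remote points at GitHub (the local branch name pr-99358 is illustrative):

# Fetch the head of PR 99358 and check out the exact revision this run tested.
git fetch origin pull/99358/head:pr-99358
git checkout 662cc70c70a0f2b269188b9b2192eeee0e1a2ab4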

Test Failures


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 5m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sif\sinit\scontainers\sfail\son\sa\sRestartAlways\spod\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:38:10.302: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/init_container.go:432
				
Click to see stdout/stderr from junit_12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done 5m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sPod\sContainer\slifecycle\sshould\snot\screate\sextra\ssandbox\sif\sall\scontainers\sare\sdone$'
test/e2e/node/pods.go:450
Feb 23 12:41:22.764: Unexpected error:
    <*errors.errorString | 0xc002b217d0>: {
        s: "Gave up after waiting 5m0s for pod \"pod-always-succeed2f8ca586-b802-43f9-8649-cd9b23b423ef\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "pod-always-succeed2f8ca586-b802-43f9-8649-cd9b23b423ef" to be "Succeeded or Failed"
occurred
test/e2e/node/pods.go:489
				
Click to see stdout/stderr from junit_05.xml

Find pod-always-succeed2f8ca586-b802-43f9-8649-cd9b23b423ef mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 9m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
test/e2e/node/kubelet.go:341
Feb 23 12:42:10.941: Unexpected error:
    <*errors.errorString | 0xc00298c170>: {
        s: "Only 18 pods started out of 20",
    }
    Only 18 pods started out of 20
occurred
test/e2e/node/kubelet.go:354
				
Click to see stdout/stderr from junit_17.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\supdate\slabels\son\smodification\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:42:26.725: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/pods.go:103
				
Click to see stdout/stderr from junit_22.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 5m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sset\smode\son\sitem\sfile\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:41:19.674: Unexpected error:
    <*errors.errorString | 0xc002f7ef10>: {
        s: "expected pod \"downwardapi-volume-6fb9e346-fc18-4f12-89ca-e576571a9a54\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-6fb9e346-fc18-4f12-89ca-e576571a9a54\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-6fb9e346-fc18-4f12-89ca-e576571a9a54" success: Gave up after waiting 5m0s for pod "downwardapi-volume-6fb9e346-fc18-4f12-89ca-e576571a9a54" to be "Succeeded or Failed"
occurred
test/e2e/framework/util.go:742
				
Click to see stdout/stderr from junit_02.xml

Find downwardapi-volume-6fb9e346-fc18-4f12-89ca-e576571a9a54 mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 5m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-storage\]\sSecrets\sshould\sbe\sconsumable\sin\smultiple\svolumes\sin\sa\spod\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:36:02.935: Unexpected error:
    <*errors.errorString | 0xc002e6cff0>: {
        s: "expected pod \"pod-secrets-7c56194d-8115-4b21-9902-f9007d97062a\" success: Gave up after waiting 5m0s for pod \"pod-secrets-7c56194d-8115-4b21-9902-f9007d97062a\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-secrets-7c56194d-8115-4b21-9902-f9007d97062a" success: Gave up after waiting 5m0s for pod "pod-secrets-7c56194d-8115-4b21-9902-f9007d97062a" to be "Succeeded or Failed"
occurred
test/e2e/framework/util.go:742
				
Click to see stdout/stderr from junit_13.xml

Find pod-secrets-7c56194d-8115-4b21-9902-f9007d97062a mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\shonor\stimeout\s\[Conformance\]$'
test/e2e/apimachinery/webhook.go:86
Feb 23 12:42:29.607: waiting for the deployment status valid%!(EXTRA string=k8s.gcr.io/e2e-test-images/agnhost:2.28, string=sample-webhook-deployment, string=webhook-7922)
Unexpected error:
    <*errors.errorString | 0xc002fb8680>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-8977db\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749680647, loc:(*time.Location)(0x78e8d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
test/e2e/apimachinery/webhook.go:845
				
Click to see stdout/stderr from junit_25.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sReplicationController\sshould\sadopt\smatching\spods\son\screation\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:39:16.317: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/pods.go:103
				
Click to see stdout/stderr from junit_08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sProxy\sversion\sv1\sA\sset\sof\svalid\sresponses\sare\sreturned\sfor\sboth\spod\sand\sservice\sProxyWithPath\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:40:25.461: Pod didn't start within time out period
Unexpected error:
    <*errors.errorString | 0xc000238230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/network/proxy.go:313
				
Click to see stdout/stderr from junit_20.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance] 5m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\screate\sa\sfunctioning\sNodePort\sservice\s\[Conformance\]$'
test/e2e/framework/framework.go:640
Feb 23 12:36:00.827: Unexpected error:
    <*errors.errorString | 0xc001150ce0>: {
        s: "Only 0 pods started out of 2",
    }
    Only 0 pods started out of 2
occurred
test/e2e/network/service.go:1158
				
Click to see stdout/stderr from junit_24.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-network] Services should be able to up and down services 12m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\sup\sand\sdown\sservices$'
test/e2e/network/service.go:1007
Feb 23 12:38:57.088: failed to create new exec pod in namespace: services-9243
Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/framework/pod/resource.go:483