Result: FAILURE
Tests: 56 failed / 194 succeeded
Started: 2020-01-10 07:42
Elapsed: 15h6m
Builder: gke-prow-default-pool-cf4891d4-n9hk
Pod: a5ed4006-337c-11ea-a4dc-8213d962824c
Resultstore: https://source.cloud.google.com/results/invocations/39b8790e-d4a5-40d2-9261-c8b2656a9e16/targets/test
infra-commit: 48932aa5d
job-version: v1.15.8-beta.1.28+d3fe1d06a0fb1c
node_os_image: cos-77-12371-89-0
revision: v1.15.8-beta.1.28+d3fe1d06a0fb1c

Test Failures


Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] (12m23s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 16:40:15.084: Couldn't delete ns: "container-probe-5354": namespace container-probe-5354 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-probe-5354 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
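
Most of the failures in this run end the same way: teardown times out because the test namespace is reported empty but never leaves the Terminating phase, which usually means a finalizer is stuck. Below is a minimal client-go sketch (recent client-go signatures; the kubeconfig path is assumed, the namespace name is copied from the failure above) for inspecting such a namespace:

// nscheck.go: inspect a namespace stuck in Terminating.
// A sketch only, assuming a standard kubeconfig at ~/.kube/config.
package main

import (
    "context"
    "fmt"
    "log"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // Namespace name taken from the failure above.
    ns, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "container-probe-5354", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    // A namespace that is "empty but is not yet removed" typically sits
    // in Terminating with finalizers still listed.
    fmt.Println("phase:", ns.Status.Phase)
    fmt.Println("spec.finalizers:", ns.Spec.Finalizers)
    fmt.Println("metadata.finalizers:", ns.ObjectMeta.Finalizers)
}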


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID (10m6s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\sexplicit\snon\-root\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 21:50:58.923: Couldn't delete ns: "security-context-test-8034": namespace security-context-test-8034 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace security-context-test-8034 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation and deletion (23s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\scustom\sresource\screation\sand\sdeletion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:139
failed to update custom resource cr-instance-2 in namespace: webhook-7529
Unexpected error:
    <*errors.StatusError | 0xc0002dd860>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/webhook-crd-test.k8s.io/v1/namespaces/webhook-7529/e2e-test-webhook-1340-crds/cr-instance-2\\\": the server could not find the requested resource\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/webhook-crd-test.k8s.io/v1/namespaces/webhook-7529/e2e-test-webhook-1340-crds/cr-instance-2\": the server could not find the requested resource",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/webhook-crd-test.k8s.io/v1/namespaces/webhook-7529/e2e-test-webhook-1340-crds/cr-instance-2\": the server could not find the requested resource") has prevented the request from succeeding
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1454
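
The 500 here wraps a NotFound: the apiserver was not serving the custom resource's group/version/resource path when the update ran. A sketch that probes the same path with the dynamic client; the GVR, namespace, and resource name are copied from the error message, everything else is assumed boilerplate:

// crprobe.go: probe the custom-resource path from the failure above.
package main

import (
    "context"
    "fmt"
    "log"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    client, err := dynamic.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // GroupVersionResource reconstructed from the failing URL:
    // /apis/webhook-crd-test.k8s.io/v1/namespaces/webhook-7529/e2e-test-webhook-1340-crds/cr-instance-2
    gvr := schema.GroupVersionResource{
        Group:    "webhook-crd-test.k8s.io",
        Version:  "v1",
        Resource: "e2e-test-webhook-1340-crds",
    }
    cr, err := client.Resource(gvr).Namespace("webhook-7529").Get(context.TODO(), "cr-instance-2", metav1.GetOptions{})
    if err != nil {
        // A NotFound/InternalError here reproduces the symptom: the
        // apiserver is not (or not yet) serving this resource.
        log.Fatal(err)
    }
    fmt.Println("resourceVersion:", cr.GetResourceVersion())
}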


Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil (10m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sorphan\spods\screated\sby\src\sif\sdeleteOptions\.OrphanDependents\sis\snil$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 22:21:41.876: Couldn't delete ns: "gc-2343": namespace gc-2343 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace gc-2343 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] (10m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sRollingUpdateDeployment\sshould\sdelete\sold\spods\sand\screate\snew\sones\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 20:09:14.008: Couldn't delete ns: "deployment-7230": namespace deployment-7230 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace deployment-7230 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance] (10m48s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sJob\sshould\sdelete\sa\sjob\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:47:03.123: Couldn't delete ns: "job-1878": namespace job-1878 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace job-1878 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] (10m4s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sReplicationController\sshould\sadopt\smatching\spods\son\screation\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 22:11:04.411: Couldn't delete ns: "replication-controller-3763": namespace replication-controller-3763 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace replication-controller-3763 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] (15m59s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:08:46.887: Couldn't delete ns: "statefulset-6587": namespace statefulset-6587 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace statefulset-6587 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available (36s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sforbid\spod\screation\swhen\sno\sPSP\sis\savailable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:79
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
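
The assertion that failed expects pod creation to be rejected, but the error came back nil: the pod was admitted even though no PSP should have allowed it. A hedged standalone sketch of the same check (namespace, pod name, and image are illustrative):

// Sketch: creating a pod with no usable PodSecurityPolicy should be Forbidden.
package main

import (
    "context"
    "fmt"
    "log"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "psp-probe"}, // illustrative name
        Spec: v1.PodSpec{
            Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
        },
    }
    _, err = clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    if apierrors.IsForbidden(err) {
        fmt.Println("pod creation forbidden, as the test expects")
        return
    }
    // The failure above is exactly this branch: err was nil, so the pod
    // was admitted despite no PSP being available.
    log.Fatalf("expected Forbidden, got: %v", err)
}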


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod (16m2s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc00027d8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
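
What timed out is a 15-minute wait for the ReplicationController to settle at one replica (the same wait also fails in the ReplicaSet and master-upgrade HPA cases below). A sketch of an equivalent poll; the namespace and RC name are illustrative assumptions, and the real test drives this through the e2e framework:

// Sketch: poll an RC until it reaches the expected replica count.
package main

import (
    "context"
    "fmt"
    "log"
    "path/filepath"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    const (
        namespace = "horizontal-pod-autoscaling" // assumed
        rcName    = "rc-light"                   // assumed
        want      = 1
    )
    // Mirrors the failing wait: 15 minutes, surfacing the generic
    // "timed out waiting for the condition" on expiry.
    err = wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
        rc, err := clientset.CoreV1().ReplicationControllers(namespace).Get(context.TODO(), rcName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        fmt.Printf("replicas: spec=%d status=%d ready=%d\n",
            *rc.Spec.Replicas, rc.Status.Replicas, rc.Status.ReadyReplicas)
        return rc.Status.ReadyReplicas == want, nil
    })
    if err != nil {
        log.Fatalf("timeout waiting for %d replicas: %v", want, err)
    }
}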


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 (15m59s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00027d8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects (10m53s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\slocalhost\s\[k8s\.io\]\sthat\sexpects\sNO\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 19:59:05.557: Couldn't delete ns: "port-forwarding-4560": namespace port-forwarding-4560 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace port-forwarding-4560 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] (10m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\srun\spod\sshould\screate\sa\spod\sfrom\san\simage\swhen\srestart\sis\sNever\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 20:19:22.307: Couldn't delete ns: "kubectl-5280": namespace kubectl-5280 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-5280 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover (17m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Jan 10 12:22:41.930: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
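
To see which pods were left behind after the node restarts, a sketch that lists pods and reports any that are not Running with a PodReady=True condition. The kube-system namespace is an assumption here; the test itself checks the set of pods that existed before the restart:

// Sketch: report pods that are not running and ready.
package main

import (
    "context"
    "fmt"
    "log"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

// isReady reports whether the pod carries a PodReady=True condition.
func isReady(pod *v1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == v1.PodReady {
            return c.Status == v1.ConditionTrue
        }
    }
    return false
}

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for i := range pods.Items {
        pod := &pods.Items[i]
        if pod.Status.Phase != v1.PodRunning || !isReady(pod) {
            fmt.Printf("NOT READY: %s (phase=%s)\n", pod.Name, pod.Status.Phase)
        }
    }
}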


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] (29m32s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Jan 10 08:16:08.004: timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc00009f010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85