Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2020-01-20 12:07
Elapsed: 39m43s
Builder: gke-prow-default-pool-cf4891d4-dklz
Pod: 5ceb27e9-3b7d-11ea-b4a5-fe173d511a50
Resultstore: https://source.cloud.google.com/results/invocations/271af8d8-8521-48da-9622-4b886b3b212e/targets/test
infra-commit: c9f705718
job-version: v1.18.0-alpha.1.943+f680c261e69bf6
repo: k8s.io/kubernetes
repo-commit: f680c261e69bf64ceb496ec65e74d18e14637011
repos: {k8s.io/kubernetes: master}
revision: v1.18.0-alpha.1.943+f680c261e69bf6

Test Failures


Node Tests (38m4s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml: exit status 1
				from junit_runner.xml
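
The complete runner output and per-host logs for this run are uploaded to GCS at the end of the job (see the gsutil upload step near the bottom of the log). As a minimal sketch, assuming gsutil is installed and the kubernetes-jenkins bucket is readable with your credentials, the artifacts can be listed and copied locally with:

  # List the uploaded artifacts for this run (bucket path taken from the upload step in the log)
  gsutil ls gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512/artifacts

  # Copy junit_runner.xml, build-log.txt and the per-host logs into a local directory
  mkdir -p ./artifacts && gsutil -m cp -r \
    gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512/artifacts ./artifacts/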




Error lines from build-log.txt

... skipping 347 lines ...
W0120 12:09:48.177] runcmd:
W0120 12:09:48.177]   - mount /tmp /tmp -o remount,exec,suid
W0120 12:09:48.177]   - mkdir -p /home/containerd
W0120 12:09:48.177]   - mount --bind /home/containerd /home/containerd
W0120 12:09:48.177]   - mount -o remount,exec /home/containerd
W0120 12:09:48.177]   - mkdir -p /etc/containerd
W0120 12:09:48.177]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/cni.template http://metadata.google.internal/computeMetadata/v1/instance/attributes/cni-template'
W0120 12:09:48.178]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /etc/containerd/config.toml http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-config'
W0120 12:09:48.178]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /home/containerd/cni.tgz https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.7.5.tgz'
W0120 12:09:48.178]   - tar xzf /home/containerd/cni.tgz -C /home/containerd --overwrite
W0120 12:09:48.178]   - systemctl restart containerd
W0120 12:09:48.178] ]
W0120 12:09:48.387] I0120 12:09:48.387166    4507 run_remote.go:500] Found image "\"ubuntu-gke-1604-d1703-0-v20181112\" created \"2018-11-13 12:00:37.552 -0800 -0800\"" based on regex "" and family "pipeline-2" in project "ubuntu-os-gke-cloud"
W0120 12:09:48.387] I0120 12:09:48.387206    4507 run_remote.go:500] Found image "\"ubuntu-gke-1604-d1703-0-v20190124\" created \"2019-02-11 09:50:11.848 -0800 -0800\"" based on regex "" and family "pipeline-2" in project "ubuntu-os-gke-cloud"
W0120 12:09:48.388] I0120 12:09:48.387223    4507 run_remote.go:500] Found image "\"ubuntu-gke-1604-d1703-0-v20190212\" created \"2019-02-14 14:44:15.266 -0800 -0800\"" based on regex "" and family "pipeline-2" in project "ubuntu-os-gke-cloud"
... skipping 112 lines ...
W0120 12:09:48.422] runcmd:
W0120 12:09:48.423]   - mount /tmp /tmp -o remount,exec,suid
W0120 12:09:48.423]   - mkdir -p /home/containerd
W0120 12:09:48.423]   - mount --bind /home/containerd /home/containerd
W0120 12:09:48.423]   - mount -o remount,exec /home/containerd
W0120 12:09:48.423]   - mkdir -p /etc/containerd
W0120 12:09:48.423]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/cni.template http://metadata.google.internal/computeMetadata/v1/instance/attributes/cni-template'
W0120 12:09:48.424]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /etc/containerd/config.toml http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-config'
W0120 12:09:48.424]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /home/containerd/cni.tgz https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.7.5.tgz'
W0120 12:09:48.424]   - tar xzf /home/containerd/cni.tgz -C /home/containerd --overwrite
W0120 12:09:48.424]   - systemctl restart containerd
W0120 12:09:48.424] ]
W0120 12:09:48.425] I0120 12:09:48.388180    4507 remote.go:41] Building archive...
W0120 12:09:48.425] I0120 12:09:48.388334    4507 build.go:42] Building k8s binaries...
I0120 12:09:48.525] Initializing e2e tests using image cos-stable.
... skipping 352 lines ...
I0120 12:46:30.608]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0120 12:46:30.609] Jan 20 12:18:34.872: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5" in namespace "security-context-test-6902" to be "success or failure"
I0120 12:46:30.609] Jan 20 12:18:34.881: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.124566ms
I0120 12:46:30.609] Jan 20 12:18:36.883: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010450368s
I0120 12:46:30.610] Jan 20 12:18:38.884: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012119314s
I0120 12:46:30.610] Jan 20 12:18:40.886: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013679136s
I0120 12:46:30.610] Jan 20 12:18:42.889: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5": Phase="Failed", Reason="", readiness=false. Elapsed: 8.016530228s
I0120 12:46:30.610] Jan 20 12:18:42.889: INFO: Pod "busybox-readonly-true-375990ca-0950-46c8-a3aa-21a9e61e74e5" satisfied condition "success or failure"
I0120 12:46:30.611] [AfterEach] [k8s.io] Security Context
I0120 12:46:30.611]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:46:30.611] Jan 20 12:18:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:46:30.611] STEP: Destroying namespace "security-context-test-6902" for this suite.
I0120 12:46:30.611] 
... skipping 1018 lines ...
I0120 12:46:30.819] STEP: Creating a kubernetes client
I0120 12:46:30.819] STEP: Building a namespace api object, basename container-runtime
I0120 12:46:30.819] Jan 20 12:19:31.828: INFO: Skipping waiting for service account
I0120 12:46:30.820] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0120 12:46:30.820]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:46:30.820] STEP: create the container
I0120 12:46:30.820] STEP: wait for the container to reach Failed
I0120 12:46:30.820] STEP: get the container status
I0120 12:46:30.820] STEP: the container should be terminated
I0120 12:46:30.821] STEP: the termination message should be set
I0120 12:46:30.821] Jan 20 12:19:34.907: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0120 12:46:30.821] STEP: delete the container
I0120 12:46:30.821] [AfterEach] [k8s.io] Container Runtime
... skipping 1866 lines ...
I0120 12:46:31.196]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:46:31.197] STEP: Creating a kubernetes client
I0120 12:46:31.197] STEP: Building a namespace api object, basename init-container
I0120 12:46:31.197] Jan 20 12:22:20.110: INFO: Skipping waiting for service account
I0120 12:46:31.197] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:46:31.197]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
I0120 12:46:31.198] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0120 12:46:31.198]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:46:31.198] STEP: creating the pod
I0120 12:46:31.198] Jan 20 12:22:20.110: INFO: PodSpec: initContainers in spec.initContainers
I0120 12:46:31.198] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:46:31.199]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:46:31.199] Jan 20 12:22:22.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 340 lines ...
I0120 12:46:31.261]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:46:31.262] STEP: Creating a kubernetes client
I0120 12:46:31.262] STEP: Building a namespace api object, basename init-container
I0120 12:46:31.262] Jan 20 12:22:23.905: INFO: Skipping waiting for service account
I0120 12:46:31.262] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:46:31.262]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
I0120 12:46:31.263] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0120 12:46:31.263]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:46:31.263] STEP: creating the pod
I0120 12:46:31.263] Jan 20 12:22:23.905: INFO: PodSpec: initContainers in spec.initContainers
I0120 12:46:31.272] Jan 20 12:23:08.316: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-604df330-5cd8-4fa1-b81d-11cc0906aec1", GenerateName:"", Namespace:"init-container-5843", SelfLink:"/api/v1/namespaces/init-container-5843/pods/pod-init-604df330-5cd8-4fa1-b81d-11cc0906aec1", UID:"9ab0e6f4-4170-4cf6-a7c5-80d9392b9e9a", ResourceVersion:"2825", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715119743, loc:(*time.Location)(0x84e3dc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"905385302"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009786a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", 
NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-6e01545f-cos-73-11647-415-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011d8360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000978710)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000978730)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000978770), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000978774), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119743, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119743, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119743, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119743, loc:(*time.Location)(0x84e3dc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.64", PodIP:"10.100.0.133", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.133"}}, StartTime:(*v1.Time)(0xc0005c2560), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00088a8c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00088a930)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://59d8dcf65da18f512824d34b1172f6681af93046bdb0931d2ef7edd1661c3937", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0005c2580), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0005c25a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0009789ac)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0120 12:46:31.272] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:46:31.272]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:46:31.273] Jan 20 12:23:08.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:46:31.273] STEP: Destroying namespace "init-container-5843" for this suite.
I0120 12:46:31.273] 
I0120 12:46:31.273] 
I0120 12:46:31.273] • [SLOW TEST:44.438 seconds]
I0120 12:46:31.273] [k8s.io] InitContainer [NodeConformance]
I0120 12:46:31.273] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
I0120 12:46:31.274]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0120 12:46:31.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:46:31.274] ------------------------------
I0120 12:46:31.274] [BeforeEach] [sig-storage] EmptyDir volumes
I0120 12:46:31.275]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:46:31.275] STEP: Creating a kubernetes client
I0120 12:46:31.275] STEP: Building a namespace api object, basename emptydir
... skipping 176 lines ...
I0120 12:46:31.310] STEP: verifying the pod is in kubernetes
I0120 12:46:31.310] STEP: updating the pod
I0120 12:46:31.310] Jan 20 12:23:17.344: INFO: Successfully updated pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78"
I0120 12:46:31.310] Jan 20 12:23:17.344: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78" in namespace "pods-4352" to be "terminated due to deadline exceeded"
I0120 12:46:31.311] Jan 20 12:23:17.345: INFO: Pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78": Phase="Running", Reason="", readiness=true. Elapsed: 1.280162ms
I0120 12:46:31.311] Jan 20 12:23:19.347: INFO: Pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78": Phase="Running", Reason="", readiness=true. Elapsed: 2.002783248s
I0120 12:46:31.312] Jan 20 12:23:21.348: INFO: Pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.004143623s
I0120 12:46:31.312] Jan 20 12:23:21.348: INFO: Pod "pod-update-activedeadlineseconds-be9ee31a-73af-422f-b024-dda05f78db78" satisfied condition "terminated due to deadline exceeded"
I0120 12:46:31.312] [AfterEach] [k8s.io] Pods
I0120 12:46:31.312]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:46:31.313] Jan 20 12:23:21.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:46:31.313] STEP: Destroying namespace "pods-4352" for this suite.
I0120 12:46:31.313] 
... skipping 679 lines ...
I0120 12:46:31.432] Jan 20 12:23:44.064: INFO: Skipping waiting for service account
I0120 12:46:31.433] [It] should not be able to pull from private registry without secret [NodeConformance]
I0120 12:46:31.433]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:380
I0120 12:46:31.433] STEP: create the container
I0120 12:46:31.433] STEP: check the container status
I0120 12:46:31.433] STEP: delete the container
I0120 12:46:31.433] Jan 20 12:28:44.651: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0120 12:46:31.433] STEP: create the container
I0120 12:46:31.434] STEP: check the container status
I0120 12:46:31.434] STEP: delete the container
I0120 12:46:31.434] [AfterEach] [k8s.io] Container Runtime
I0120 12:46:31.434]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:46:31.434] Jan 20 12:28:46.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
I0120 12:46:31.443] I0120 12:46:25.410758     990 services.go:156] Get log file "containerd.log" with journalctl command [-u containerd].
I0120 12:46:31.444] I0120 12:46:26.208556     990 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20200120T121610.service].
I0120 12:46:31.444] I0120 12:46:28.335032     990 e2e_node_suite_test.go:221] Tests Finished
I0120 12:46:31.444] 
I0120 12:46:31.444] 
I0120 12:46:31.444] Ran 157 of 315 Specs in 1803.408 seconds
I0120 12:46:31.444] SUCCESS! -- 157 Passed | 0 Failed | 0 Pending | 158 Skipped
I0120 12:46:31.445] 
I0120 12:46:31.445] 
I0120 12:46:31.445] Ginkgo ran 1 suite in 30m5.303685735s
I0120 12:46:31.445] Test Suite Passed
I0120 12:46:31.445] 
I0120 12:46:31.445] Failure Finished Test Suite on Host tmp-node-e2e-6e01545f-cos-73-11647-415-0
I0120 12:46:31.446] command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.227.168.60:/tmp/node-e2e-20200120T121610/results/*.log /workspace/_artifacts/tmp-node-e2e-6e01545f-cos-73-11647-415-0] failed with error: exit status 1
I0120 12:46:31.446] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0120 12:46:31.446] <                              FINISH TEST                               <
I0120 12:46:31.446] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0120 12:46:31.446] 
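
Note that the Ginkgo suite on this host reports "Test Suite Passed"; the failure recorded above is the runner's scp of the *.log result files back to the workspace (the second host further down fails the same way, producing the "2 errors encountered" summary at the end of the log). As a rough debugging sketch, assuming the test instance has not yet been deleted and the prow SSH key from the workspace is available, the remote results directory could be inspected by hand before retrying the copy; HOST below is a placeholder for the external IP printed in the scp command (35.227.168.60 here):

  # Hypothetical check: see whether the remote results directory actually contains any *.log files.
  # SSH options and key path mirror the failing scp command above.
  ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
      -i /workspace/.ssh/google_compute_engine prow@HOST \
      'ls -la /tmp/node-e2e-20200120T121610/results/'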
W0120 12:46:58.979] I0120 12:46:58.979029    4507 remote.go:123] Copying test artifacts from "tmp-node-e2e-6e01545f-ubuntu-gke-1804-d1809-0-v20200117"
W0120 12:46:59.950] I0120 12:46:59.947542    4507 run_remote.go:772] Deleting instance "tmp-node-e2e-6e01545f-ubuntu-gke-1804-d1809-0-v20200117"
... skipping 288 lines ...
I0120 12:47:00.930]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0120 12:47:00.930] Jan 20 12:18:38.465: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259" in namespace "security-context-test-5466" to be "success or failure"
I0120 12:47:00.931] Jan 20 12:18:38.471: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259": Phase="Pending", Reason="", readiness=false. Elapsed: 5.403392ms
I0120 12:47:00.931] Jan 20 12:18:40.473: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007209445s
I0120 12:47:00.931] Jan 20 12:18:42.475: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009059615s
I0120 12:47:00.931] Jan 20 12:18:44.476: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010277351s
I0120 12:47:00.932] Jan 20 12:18:46.477: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259": Phase="Failed", Reason="", readiness=false. Elapsed: 8.011635663s
I0120 12:47:00.932] Jan 20 12:18:46.477: INFO: Pod "busybox-readonly-true-91a387b4-13ad-45c3-9d6c-2fc24b729259" satisfied condition "success or failure"
I0120 12:47:00.932] [AfterEach] [k8s.io] Security Context
I0120 12:47:00.932]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:47:00.933] Jan 20 12:18:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:47:00.933] STEP: Destroying namespace "security-context-test-5466" for this suite.
I0120 12:47:00.933] 
... skipping 1002 lines ...
I0120 12:47:01.139] STEP: Creating a kubernetes client
I0120 12:47:01.139] STEP: Building a namespace api object, basename container-runtime
I0120 12:47:01.139] Jan 20 12:19:33.889: INFO: Skipping waiting for service account
I0120 12:47:01.139] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0120 12:47:01.139]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:47:01.140] STEP: create the container
I0120 12:47:01.140] STEP: wait for the container to reach Failed
I0120 12:47:01.140] STEP: get the container status
I0120 12:47:01.140] STEP: the container should be terminated
I0120 12:47:01.140] STEP: the termination message should be set
I0120 12:47:01.140] Jan 20 12:19:36.953: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0120 12:47:01.140] STEP: delete the container
I0120 12:47:01.141] [AfterEach] [k8s.io] Container Runtime
... skipping 1831 lines ...
I0120 12:47:01.493]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:47:01.493] STEP: Creating a kubernetes client
I0120 12:47:01.493] STEP: Building a namespace api object, basename init-container
I0120 12:47:01.493] Jan 20 12:22:12.506: INFO: Skipping waiting for service account
I0120 12:47:01.494] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:47:01.494]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
I0120 12:47:01.494] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0120 12:47:01.494]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:47:01.495] STEP: creating the pod
I0120 12:47:01.495] Jan 20 12:22:12.506: INFO: PodSpec: initContainers in spec.initContainers
I0120 12:47:01.495] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:47:01.495]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:47:01.495] Jan 20 12:22:14.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 310 lines ...
I0120 12:47:01.553]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:47:01.553] STEP: Creating a kubernetes client
I0120 12:47:01.554] STEP: Building a namespace api object, basename init-container
I0120 12:47:01.554] Jan 20 12:22:19.456: INFO: Skipping waiting for service account
I0120 12:47:01.554] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:47:01.554]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
I0120 12:47:01.554] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0120 12:47:01.554]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:47:01.555] STEP: creating the pod
I0120 12:47:01.555] Jan 20 12:22:19.456: INFO: PodSpec: initContainers in spec.initContainers
I0120 12:47:01.562] Jan 20 12:22:58.033: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f139affa-638a-486f-ba41-8e3d8548f3d4", GenerateName:"", Namespace:"init-container-6106", SelfLink:"/api/v1/namespaces/init-container-6106/pods/pod-init-f139affa-638a-486f-ba41-8e3d8548f3d4", UID:"4e803860-d41f-4ba2-b111-a0e7ffca0707", ResourceVersion:"2745", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715119739, loc:(*time.Location)(0x84e3dc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"456218081"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a49240), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", 
NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-6e01545f-ubuntu-gke-1804-d1809-0-v20200117", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fb0060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a492c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a492e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a492f0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a492f4), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119739, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119739, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119739, loc:(*time.Location)(0x84e3dc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715119739, loc:(*time.Location)(0x84e3dc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.63", PodIP:"10.100.0.134", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.134"}}, StartTime:(*v1.Time)(0xc000914960), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0009149e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007de150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://b702565c7802aee6a958df7d0e7c00d7d43d24eb6a275c520b006780712d8201", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000914a60), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000914ac0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000a493f4)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0120 12:47:01.563] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0120 12:47:01.563]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:47:01.563] Jan 20 12:22:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:47:01.563] STEP: Destroying namespace "init-container-6106" for this suite.
I0120 12:47:01.563] 
I0120 12:47:01.563] 
I0120 12:47:01.564] • [SLOW TEST:38.594 seconds]
I0120 12:47:01.564] [k8s.io] InitContainer [NodeConformance]
I0120 12:47:01.564] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
I0120 12:47:01.564]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0120 12:47:01.564]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
I0120 12:47:01.564] ------------------------------
I0120 12:47:01.564] [BeforeEach] [k8s.io] MirrorPod
I0120 12:47:01.565]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0120 12:47:01.565] STEP: Creating a kubernetes client
I0120 12:47:01.565] STEP: Building a namespace api object, basename mirror-pod
... skipping 158 lines ...
I0120 12:47:01.594] STEP: verifying the pod is in kubernetes
I0120 12:47:01.594] STEP: updating the pod
I0120 12:47:01.594] Jan 20 12:23:08.136: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81"
I0120 12:47:01.594] Jan 20 12:23:08.136: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81" in namespace "pods-1816" to be "terminated due to deadline exceeded"
I0120 12:47:01.595] Jan 20 12:23:08.138: INFO: Pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81": Phase="Running", Reason="", readiness=true. Elapsed: 2.652288ms
I0120 12:47:01.595] Jan 20 12:23:10.140: INFO: Pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81": Phase="Running", Reason="", readiness=true. Elapsed: 2.004254461s
I0120 12:47:01.595] Jan 20 12:23:12.142: INFO: Pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.00586931s
I0120 12:47:01.596] Jan 20 12:23:12.142: INFO: Pod "pod-update-activedeadlineseconds-bfd21ea7-5603-492e-af15-8344e6dc4a81" satisfied condition "terminated due to deadline exceeded"
I0120 12:47:01.596] [AfterEach] [k8s.io] Pods
I0120 12:47:01.596]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:47:01.596] Jan 20 12:23:12.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0120 12:47:01.596] STEP: Destroying namespace "pods-1816" for this suite.
I0120 12:47:01.597] 
... skipping 742 lines ...
I0120 12:47:01.747] Jan 20 12:23:39.685: INFO: Skipping waiting for service account
I0120 12:47:01.747] [It] should not be able to pull from private registry without secret [NodeConformance]
I0120 12:47:01.747]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:380
I0120 12:47:01.747] STEP: create the container
I0120 12:47:01.748] STEP: check the container status
I0120 12:47:01.748] STEP: delete the container
I0120 12:47:01.748] Jan 20 12:28:40.261: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0120 12:47:01.748] STEP: create the container
I0120 12:47:01.748] STEP: check the container status
I0120 12:47:01.748] STEP: delete the container
I0120 12:47:01.749] [AfterEach] [k8s.io] Container Runtime
I0120 12:47:01.749]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0120 12:47:01.749] Jan 20 12:28:41.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 48 lines ...
I0120 12:47:01.759] I0120 12:46:57.908579    2876 services.go:156] Get log file "containerd.log" with journalctl command [-u containerd].
I0120 12:47:01.759] I0120 12:46:58.236200    2876 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20200120T121610.service].
I0120 12:47:01.759] I0120 12:46:58.909602    2876 e2e_node_suite_test.go:221] Tests Finished
I0120 12:47:01.759] 
I0120 12:47:01.759] 
I0120 12:47:01.760] Ran 157 of 315 Specs in 1833.173 seconds
I0120 12:47:01.760] SUCCESS! -- 157 Passed | 0 Failed | 0 Pending | 158 Skipped
I0120 12:47:01.760] 
I0120 12:47:01.760] 
I0120 12:47:01.760] Ginkgo ran 1 suite in 30m34.914843989s
I0120 12:47:01.760] Test Suite Passed
I0120 12:47:01.760] 
I0120 12:47:01.761] Failure Finished Test Suite on Host tmp-node-e2e-6e01545f-ubuntu-gke-1804-d1809-0-v20200117
I0120 12:47:01.761] command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.233.146.214:/tmp/node-e2e-20200120T121610/results/*.log /workspace/_artifacts/tmp-node-e2e-6e01545f-ubuntu-gke-1804-d1809-0-v20200117] failed with error: exit status 1
I0120 12:47:01.761] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0120 12:47:01.761] <                              FINISH TEST                               <
I0120 12:47:01.762] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0120 12:47:01.762] 
I0120 12:47:01.762] Failure: 2 errors encountered.
W0120 12:47:01.866] exit status 1
W0120 12:47:03.059] 2020/01/20 12:47:03 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml' finished in 38m4.247792397s
W0120 12:47:03.059] 2020/01/20 12:47:03 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0120 12:47:03.059] 2020/01/20 12:47:03 node.go:52: Noop - Node Down()
W0120 12:47:03.060] 2020/01/20 12:47:03 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0120 12:47:03.060] 2020/01/20 12:47:03 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0120 12:47:03.782] 2020/01/20 12:47:03 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 727.134147ms
W0120 12:47:03.784] 2020/01/20 12:47:03 main.go:316: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml: exit status 1]
W0120 12:47:03.803] Traceback (most recent call last):
W0120 12:47:03.803]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0120 12:47:03.803]     main(parse_args())
W0120 12:47:03.803]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0120 12:47:03.803]     mode.start(runner_args)
W0120 12:47:03.804]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0120 12:47:03.804]     check_env(env, self.command, *args)
W0120 12:47:03.804]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0120 12:47:03.804]     subprocess.check_call(cmd, env=env)
W0120 12:47:03.804]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0120 12:47:03.805]     raise CalledProcessError(retcode, cmd)
W0120 12:47:03.806] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Serial\\]"', '--timeout=65m')' returned non-zero exit status 1
E0120 12:47:03.842] Command failed
I0120 12:47:03.843] process 317 exited with code 1 after 38.1m
E0120 12:47:03.843] FAIL: ci-cos-containerd-node-e2e
I0120 12:47:03.844] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0120 12:47:04.825] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0120 12:47:04.934] process 40710 exited with code 0 after 0.0m
I0120 12:47:04.934] Call:  gcloud config get-value account
I0120 12:47:05.676] process 40723 exited with code 0 after 0.0m
I0120 12:47:05.677] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0120 12:47:05.677] Upload result and artifacts...
I0120 12:47:05.678] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512
I0120 12:47:05.679] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512/artifacts
W0120 12:47:08.003] CommandException: One or more URLs matched no objects.
E0120 12:47:08.320] Command failed
I0120 12:47:08.320] process 40736 exited with code 1 after 0.0m
W0120 12:47:08.320] Remote dir gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512/artifacts not exist yet
I0120 12:47:08.321] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1219229930714304512/artifacts
I0120 12:47:11.958] process 40881 exited with code 0 after 0.1m
I0120 12:47:11.959] Call:  git rev-parse HEAD
I0120 12:47:11.972] process 41414 exited with code 0 after 0.0m
... skipping 13 lines ...