PR: hormes: add heartbeat inside watch
Result: FAILURE
Tests: 1 failed / 475 succeeded
Started: 2019-03-21 02:21
Elapsed: 18m37s
Builder: gke-prow-containerd-pool-99179761-hr9n
Refs: master:ed4258e5, 75474:0d6751c9
pod: e82f8010-4b7f-11e9-b980-0a580a6c0aad
infra-commit: 524058c8d
job-version: v1.15.0-alpha.0.1381+5a45f18c9bfa50
repo: k8s.io/kubernetes
repo-commit: 5a45f18c9bfa506a522a77dec1d76cb8199eedf7
repos: {k8s.io/kubernetes: 'master:ed4258e5c0d722425b1c7744b2bf09ad0d9fbfea,75474:0d6751c9edc9dc1b283538ccd2a14673a6d7d28c'}
revision: v1.15.0-alpha.0.1381+5a45f18c9bfa50

Test Failures


Node Tests 17m17s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
(from junit_runner.xml)



475 tests passed; 424 tests skipped (details collapsed).

Error lines from build-log.txt

... skipping 326 lines ...
W0321 02:26:57.158] I0321 02:26:57.157570    4505 utils.go:117] Killing any existing node processes on "tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0"
W0321 02:26:58.213] I0321 02:26:58.213122    4505 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0321 02:26:58.213] I0321 02:26:58.213198    4505 node_e2e.go:164] Starting tests on "tmp-node-e2e-37b7fc6c-cos-stable-60-9592-84-0"
W0321 02:26:58.367] I0321 02:26:58.367059    4505 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0321 02:26:58.367] I0321 02:26:58.367104    4505 node_e2e.go:164] Starting tests on "tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0"
W0321 02:26:58.406] I0321 02:26:58.405907    4505 node_e2e.go:164] Starting tests on "tmp-node-e2e-37b7fc6c-coreos-beta-1883-1-0-v20180911"
W0321 02:29:04.649] I0321 02:29:04.648807    4505 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0321 02:29:05.380] I0321 02:29:05.380216    4505 remote.go:202] Got the system logs from journald; copying it back...
W0321 02:29:06.353] I0321 02:29:06.352911    4505 remote.go:122] Copying test artifacts from "tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113"
W0321 02:29:07.615] I0321 02:29:07.615333    4505 run_remote.go:718] Deleting instance "tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113"
I0321 02:29:08.128] 
I0321 02:29:08.129] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0321 02:29:08.129] >                              START TEST                                >
I0321 02:29:08.129] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0321 02:29:08.129] Start Test Suite on Host tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113
I0321 02:29:08.129] 
I0321 02:29:08.129] Failure Finished Test Suite on Host tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113
I0321 02:29:08.130] [failed to install cni plugin on "tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113": command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.233.154.139 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20190321T022645/cni/bin ; curl -s -L https://dl.k8s.io/network-plugins/cni-plugins-amd64-v0.7.5.tgz | tar -xz -C /tmp/node-e2e-20190321T022645/cni/bin'] failed with error: exit status 2 output: "\ngzip: stdin: unexpected end of file\ntar: Child returned status 1\ntar: Error is not recoverable: exiting now\n", command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.233.154.139:/tmp/node-e2e-20190321T022645/results/*.log /workspace/_artifacts/tmp-node-e2e-37b7fc6c-ubuntu-gke-1804-d1703-0-v20181113] failed with error: exit status 1]
I0321 02:29:08.130] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0321 02:29:08.130] <                              FINISH TEST                               <
I0321 02:29:08.130] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0321 02:29:08.131] 
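The root cause in the block above is a truncated CNI tarball: the install step pipes curl straight into tar, so when the transfer breaks off mid-stream the only symptom is gzip's "unexpected end of file", and curl's own failure is hidden by the pipe. A minimal sketch of this failure mode, using throwaway local files rather than the job's real URL and paths, shows why extracting only a verified-complete archive fails more cleanly:

```shell
# Sketch: a truncated .tgz fed to tar reproduces the "gzip: stdin:
# unexpected end of file" class of error from the build log, while a
# download-to-file-then-extract order lets the transfer be checked
# before anything is installed. All paths here are illustrative
# temporaries, not the job's actual ones.
set -u

workdir="$(mktemp -d)"
printf 'hello\n' > "$workdir/payload"
tar -czf "$workdir/good.tgz" -C "$workdir" payload

# Simulate the truncated download seen in the log: keep only a prefix.
head -c 20 "$workdir/good.tgz" > "$workdir/truncated.tgz"
mkdir -p "$workdir/bin"

# Extracting the truncated archive fails, so nothing partial lands in bin.
truncated_ok=0
tar -xzf "$workdir/truncated.tgz" -C "$workdir/bin" 2>/dev/null && truncated_ok=1

# The intact archive extracts normally.
intact_ok=0
tar -xzf "$workdir/good.tgz" -C "$workdir/bin" 2>/dev/null && intact_ok=1

rm -rf "$workdir"
```

In the piped form from the log (`curl -s -L … | tar -xz -C …`), `-s` also suppresses curl's error output; adding `--fail --retry` to curl, or downloading to a temp file first as above, would make a flaky network surface as a clear download error instead of a tar extraction error.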
W0321 02:36:27.252] I0321 02:36:27.252498    4505 remote.go:122] Copying test artifacts from "tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0"
W0321 02:36:32.626] I0321 02:36:32.626222    4505 run_remote.go:718] Deleting instance "tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0"
... skipping 258 lines ...
I0321 02:36:33.164] STEP: Creating a kubernetes client
I0321 02:36:33.164] STEP: Building a namespace api object, basename init-container
I0321 02:36:33.164] Mar 21 02:28:13.714: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0321 02:36:33.165] Mar 21 02:28:13.714: INFO: Skipping waiting for service account
I0321 02:36:33.165] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.165]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:36:33.165] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:36:33.165]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:33.165] STEP: creating the pod
I0321 02:36:33.166] Mar 21 02:28:13.714: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:36:33.166] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.166]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:33.166] Mar 21 02:28:19.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0321 02:36:33.167] Mar 21 02:28:27.271: INFO: namespace init-container-2745 deletion completed in 8.1371992s
I0321 02:36:33.167] 
I0321 02:36:33.167] 
I0321 02:36:33.167] • [SLOW TEST:13.638 seconds]
I0321 02:36:33.167] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.168] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:36:33.168]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:36:33.168]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:33.168] ------------------------------
I0321 02:36:33.168] SSS
I0321 02:36:33.169] ------------------------------
I0321 02:36:33.169] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.169]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
... skipping 248 lines ...
I0321 02:36:33.210] [BeforeEach] [k8s.io] Security Context
I0321 02:36:33.211]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0321 02:36:33.211] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0321 02:36:33.211]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0321 02:36:33.211] Mar 21 02:28:32.209: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-0531ce05-4b81-11e9-8431-42010a8a0064" in namespace "security-context-test-8359" to be "success or failure"
I0321 02:36:33.211] Mar 21 02:28:32.215: INFO: Pod "busybox-readonly-true-0531ce05-4b81-11e9-8431-42010a8a0064": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668762ms
I0321 02:36:33.212] Mar 21 02:28:34.217: INFO: Pod "busybox-readonly-true-0531ce05-4b81-11e9-8431-42010a8a0064": Phase="Failed", Reason="", readiness=false. Elapsed: 2.00857643s
I0321 02:36:33.212] Mar 21 02:28:34.217: INFO: Pod "busybox-readonly-true-0531ce05-4b81-11e9-8431-42010a8a0064" satisfied condition "success or failure"
I0321 02:36:33.212] [AfterEach] [k8s.io] Security Context
I0321 02:36:33.212]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:33.212] Mar 21 02:28:34.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:33.212] STEP: Destroying namespace "security-context-test-8359" for this suite.
I0321 02:36:33.212] Mar 21 02:28:40.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 408 lines ...
I0321 02:36:33.280] STEP: Creating a kubernetes client
I0321 02:36:33.280] STEP: Building a namespace api object, basename container-runtime
I0321 02:36:33.280] Mar 21 02:28:55.727: INFO: Skipping waiting for service account
I0321 02:36:33.280] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance]
I0321 02:36:33.281]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:191
I0321 02:36:33.281] STEP: create the container
I0321 02:36:33.281] STEP: wait for the container to reach Failed
I0321 02:36:33.281] STEP: get the container status
I0321 02:36:33.281] STEP: the container should be terminated
I0321 02:36:33.281] STEP: the termination message should be set
I0321 02:36:33.281] Mar 21 02:28:58.801: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0321 02:36:33.282] STEP: delete the container
I0321 02:36:33.282] [AfterEach] [k8s.io] Container Runtime
... skipping 2753 lines ...
I0321 02:36:33.692] Mar 21 02:33:11.440: INFO: Pod "podaa6c882a-4b81-11e9-9bc4-42010a8a0064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019842847s
I0321 02:36:33.692] STEP: Saw pod success
I0321 02:36:33.692] Mar 21 02:33:11.440: INFO: Pod "podaa6c882a-4b81-11e9-9bc4-42010a8a0064" satisfied condition "success or failure"
I0321 02:36:33.692] STEP: Verifying the memory backed volume was removed from node
I0321 02:36:33.692] Mar 21 02:33:11.466: INFO: Waiting up to 5m0s for pod "podaba9c20d-4b81-11e9-9bc4-42010a8a0064" in namespace "kubelet-volume-manager-488" to be "success or failure"
I0321 02:36:33.693] Mar 21 02:33:11.497: INFO: Pod "podaba9c20d-4b81-11e9-9bc4-42010a8a0064": Phase="Pending", Reason="", readiness=false. Elapsed: 30.571021ms
I0321 02:36:33.693] Mar 21 02:33:13.499: INFO: Pod "podaba9c20d-4b81-11e9-9bc4-42010a8a0064": Phase="Failed", Reason="", readiness=false. Elapsed: 2.032879571s
I0321 02:36:33.693] Mar 21 02:33:23.518: INFO: Waiting up to 5m0s for pod "podb2dc42ea-4b81-11e9-9bc4-42010a8a0064" in namespace "kubelet-volume-manager-488" to be "success or failure"
I0321 02:36:33.693] Mar 21 02:33:23.540: INFO: Pod "podb2dc42ea-4b81-11e9-9bc4-42010a8a0064": Phase="Pending", Reason="", readiness=false. Elapsed: 22.328413ms
I0321 02:36:33.693] Mar 21 02:33:25.542: INFO: Pod "podb2dc42ea-4b81-11e9-9bc4-42010a8a0064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024647636s
I0321 02:36:33.694] STEP: Saw pod success
I0321 02:36:33.694] Mar 21 02:33:25.542: INFO: Pod "podb2dc42ea-4b81-11e9-9bc4-42010a8a0064" satisfied condition "success or failure"
I0321 02:36:33.694] [AfterEach] [k8s.io] Kubelet Volume Manager
... skipping 167 lines ...
I0321 02:36:33.713] STEP: verifying the pod is in kubernetes
I0321 02:36:33.713] STEP: updating the pod
I0321 02:36:33.714] Mar 21 02:33:42.444: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064"
I0321 02:36:33.714] Mar 21 02:33:42.444: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064" in namespace "pods-4441" to be "terminated due to deadline exceeded"
I0321 02:36:33.714] Mar 21 02:33:42.445: INFO: Pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064": Phase="Running", Reason="", readiness=true. Elapsed: 1.652491ms
I0321 02:36:33.714] Mar 21 02:33:44.456: INFO: Pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064": Phase="Running", Reason="", readiness=true. Elapsed: 2.012588479s
I0321 02:36:33.714] Mar 21 02:33:46.458: INFO: Pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.014359419s
I0321 02:36:33.714] Mar 21 02:33:46.458: INFO: Pod "pod-update-activedeadlineseconds-bc9b07ea-4b81-11e9-9bc4-42010a8a0064" satisfied condition "terminated due to deadline exceeded"
I0321 02:36:33.714] [AfterEach] [k8s.io] Pods
I0321 02:36:33.715]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:33.715] Mar 21 02:33:46.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:33.715] STEP: Destroying namespace "pods-4441" for this suite.
I0321 02:36:33.715] Mar 21 02:33:52.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 174 lines ...
I0321 02:36:33.734] Mar 21 02:29:06.829: INFO: Skipping waiting for service account
I0321 02:36:33.734] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0321 02:36:33.734]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0321 02:36:33.734] STEP: create the container
I0321 02:36:33.734] STEP: check the container status
I0321 02:36:33.735] STEP: delete the container
I0321 02:36:33.735] Mar 21 02:34:07.074: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0321 02:36:33.735] STEP: create the container
I0321 02:36:33.735] STEP: check the container status
I0321 02:36:33.735] STEP: delete the container
I0321 02:36:33.735] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0321 02:36:33.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:33.735] Mar 21 02:34:09.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 50 lines ...
I0321 02:36:33.741]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:36:33.741] STEP: Creating a kubernetes client
I0321 02:36:33.741] STEP: Building a namespace api object, basename init-container
I0321 02:36:33.741] Mar 21 02:33:18.967: INFO: Skipping waiting for service account
I0321 02:36:33.741] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.741]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:36:33.741] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:36:33.742]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:33.742] STEP: creating the pod
I0321 02:36:33.742] Mar 21 02:33:18.967: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:36:33.746] Mar 21 02:34:03.336: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b0264614-4b81-11e9-abfe-42010a8a0064", GenerateName:"", Namespace:"init-container-2073", SelfLink:"/api/v1/namespaces/init-container-2073/pods/pod-init-b0264614-4b81-11e9-abfe-42010a8a0064", UID:"b026b8d7-4b81-11e9-b1d1-42010a8a0064", ResourceVersion:"2564", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688732398, loc:(*time.Location)(0xbdcf8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"967354346"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000c55520), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b904e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c55590)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c555b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000c555c0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000c555c4)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732398, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732398, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732398, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732398, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.100", PodIP:"10.100.0.128", StartTime:(*v1.Time)(0xc0006adda0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000235260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000235340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://7e841f1ed6c40067a0cf6081c8a83754b46d969ba882f726340c83685f257260"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0006adde0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0006ade00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0321 02:36:33.746] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.746]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:33.747] Mar 21 02:34:03.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:33.747] STEP: Destroying namespace "init-container-2073" for this suite.
I0321 02:36:33.747] Mar 21 02:34:25.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0321 02:36:33.747] Mar 21 02:34:25.421: INFO: namespace init-container-2073 deletion completed in 22.073397346s
I0321 02:36:33.747] 
I0321 02:36:33.747] 
I0321 02:36:33.747] • [SLOW TEST:66.463 seconds]
I0321 02:36:33.747] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:33.747] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:36:33.748]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:36:33.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:33.748] ------------------------------
I0321 02:36:33.748] [BeforeEach] [sig-storage] EmptyDir volumes
I0321 02:36:33.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:36:33.748] STEP: Creating a kubernetes client
I0321 02:36:33.748] STEP: Building a namespace api object, basename emptydir
... skipping 1132 lines ...
I0321 02:36:33.880]     should execute prestop exec hook properly [NodeConformance] [Conformance]
I0321 02:36:33.881]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:33.881] ------------------------------
I0321 02:36:33.881] I0321 02:36:25.259566    1302 e2e_node_suite_test.go:186] Stopping node services...
I0321 02:36:33.881] I0321 02:36:25.259595    1302 server.go:258] Kill server "services"
I0321 02:36:33.881] I0321 02:36:25.259606    1302 server.go:295] Killing process 1818 (services) with -TERM
I0321 02:36:33.881] E0321 02:36:25.350838    1302 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0321 02:36:33.881] I0321 02:36:25.350862    1302 server.go:258] Kill server "kubelet"
I0321 02:36:33.881] I0321 02:36:25.360291    1302 services.go:146] Fetching log files...
I0321 02:36:33.882] I0321 02:36:25.360380    1302 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0321 02:36:33.882] I0321 02:36:25.490676    1302 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0321 02:36:33.882] I0321 02:36:26.104850    1302 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0321 02:36:33.882] I0321 02:36:26.140248    1302 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190321T022645.service].
I0321 02:36:33.882] I0321 02:36:27.189631    1302 e2e_node_suite_test.go:191] Tests Finished
I0321 02:36:33.882] 
I0321 02:36:33.882] 
I0321 02:36:33.882] Ran 156 of 298 Specs in 564.138 seconds
I0321 02:36:33.883] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 142 Skipped 
I0321 02:36:33.883] 
I0321 02:36:33.883] Ginkgo ran 1 suite in 9m28.209391113s
I0321 02:36:33.883] Test Suite Passed
I0321 02:36:33.883] 
I0321 02:36:33.883] Success Finished Test Suite on Host tmp-node-e2e-37b7fc6c-cos-stable-63-10032-71-0
I0321 02:36:33.883] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 127 lines ...
I0321 02:36:39.327] STEP: Creating a kubernetes client
I0321 02:36:39.327] STEP: Building a namespace api object, basename init-container
I0321 02:36:39.327] Mar 21 02:28:08.589: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0321 02:36:39.327] Mar 21 02:28:08.589: INFO: Skipping waiting for service account
I0321 02:36:39.328] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:36:39.328] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:36:39.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:39.328] STEP: creating the pod
I0321 02:36:39.328] Mar 21 02:28:08.589: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:36:39.328] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.329]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:39.329] Mar 21 02:28:13.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0321 02:36:39.329] Mar 21 02:28:19.900: INFO: namespace init-container-2276 deletion completed in 6.071589917s
I0321 02:36:39.329] 
I0321 02:36:39.329] 
I0321 02:36:39.329] • [SLOW TEST:11.396 seconds]
I0321 02:36:39.329] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.330] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:36:39.330]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:36:39.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:39.330] ------------------------------
I0321 02:36:39.330] S
I0321 02:36:39.330] ------------------------------
I0321 02:36:39.330] [BeforeEach] [k8s.io] Container Runtime
I0321 02:36:39.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
... skipping 444 lines ...
I0321 02:36:39.385]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0321 02:36:39.385] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0321 02:36:39.385]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0321 02:36:39.385] Mar 21 02:28:30.223: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-0402e14e-4b81-11e9-b3b7-42010a8a0063" in namespace "security-context-test-4574" to be "success or failure"
I0321 02:36:39.386] Mar 21 02:28:30.225: INFO: Pod "busybox-readonly-true-0402e14e-4b81-11e9-b3b7-42010a8a0063": Phase="Pending", Reason="", readiness=false. Elapsed: 1.681952ms
I0321 02:36:39.386] Mar 21 02:28:32.228: INFO: Pod "busybox-readonly-true-0402e14e-4b81-11e9-b3b7-42010a8a0063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00494545s
I0321 02:36:39.386] Mar 21 02:28:34.230: INFO: Pod "busybox-readonly-true-0402e14e-4b81-11e9-b3b7-42010a8a0063": Phase="Failed", Reason="", readiness=false. Elapsed: 4.007094247s
I0321 02:36:39.386] Mar 21 02:28:34.230: INFO: Pod "busybox-readonly-true-0402e14e-4b81-11e9-b3b7-42010a8a0063" satisfied condition "success or failure"
I0321 02:36:39.386] [AfterEach] [k8s.io] Security Context
I0321 02:36:39.386]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:39.386] Mar 21 02:28:34.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:39.386] STEP: Destroying namespace "security-context-test-4574" for this suite.
I0321 02:36:39.387] Mar 21 02:28:40.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 324 lines ...
I0321 02:36:39.424] STEP: Creating a kubernetes client
I0321 02:36:39.424] STEP: Building a namespace api object, basename container-runtime
I0321 02:36:39.425] Mar 21 02:28:52.541: INFO: Skipping waiting for service account
I0321 02:36:39.425] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance]
I0321 02:36:39.425]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:191
I0321 02:36:39.425] STEP: create the container
I0321 02:36:39.425] STEP: wait for the container to reach Failed
I0321 02:36:39.425] STEP: get the container status
I0321 02:36:39.425] STEP: the container should be terminated
I0321 02:36:39.425] STEP: the termination message should be set
I0321 02:36:39.425] Mar 21 02:28:53.552: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0321 02:36:39.425] STEP: delete the container
I0321 02:36:39.426] [AfterEach] [k8s.io] Container Runtime
... skipping 2915 lines ...
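The termination-message test above creates a container that writes "DONE" to its log and exits nonzero, then checks that the kubelet copied the log tail into the container's termination message. A minimal manifest exercising the same `FallbackToLogsOnError` behavior might look like this (pod and container names are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Nothing is written to /dev/termination-log and the container fails,
    # so the kubelet falls back to the tail of the container log ("DONE").
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```

With the default policy (`File`), the termination message would stay empty here, since the container never writes to `/dev/termination-log`.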
I0321 02:36:39.764] STEP: submitting the pod to kubernetes
I0321 02:36:39.764] STEP: verifying the pod is in kubernetes
I0321 02:36:39.764] STEP: updating the pod
I0321 02:36:39.764] Mar 21 02:33:48.476: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c034fba2-4b81-11e9-9202-42010a8a0063"
I0321 02:36:39.764] Mar 21 02:33:48.476: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c034fba2-4b81-11e9-9202-42010a8a0063" in namespace "pods-6672" to be "terminated due to deadline exceeded"
I0321 02:36:39.764] Mar 21 02:33:48.478: INFO: Pod "pod-update-activedeadlineseconds-c034fba2-4b81-11e9-9202-42010a8a0063": Phase="Running", Reason="", readiness=true. Elapsed: 1.975704ms
I0321 02:36:39.764] Mar 21 02:33:50.480: INFO: Pod "pod-update-activedeadlineseconds-c034fba2-4b81-11e9-9202-42010a8a0063": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.003998492s
I0321 02:36:39.765] Mar 21 02:33:50.480: INFO: Pod "pod-update-activedeadlineseconds-c034fba2-4b81-11e9-9202-42010a8a0063" satisfied condition "terminated due to deadline exceeded"
I0321 02:36:39.765] [AfterEach] [k8s.io] Pods
I0321 02:36:39.765]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:39.765] Mar 21 02:33:50.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:39.765] STEP: Destroying namespace "pods-6672" for this suite.
I0321 02:36:39.765] Mar 21 02:33:56.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 184 lines ...
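The ActiveDeadlineSeconds test above patches `spec.activeDeadlineSeconds` onto an already-running pod and then waits for `Phase="Failed"`, `Reason="DeadlineExceeded"`. A rough sketch of a pod that hits the same deadline on its own (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: active-deadline-demo   # illustrative name
spec:
  # Once the pod has been active this long, the kubelet kills it and the
  # status becomes Phase=Failed, Reason=DeadlineExceeded, as in the log above.
  activeDeadlineSeconds: 5
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
```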
I0321 02:36:39.786] Mar 21 02:29:05.286: INFO: Skipping waiting for service account
I0321 02:36:39.786] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0321 02:36:39.787]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0321 02:36:39.787] STEP: create the container
I0321 02:36:39.787] STEP: check the container status
I0321 02:36:39.787] STEP: delete the container
I0321 02:36:39.787] Mar 21 02:34:05.935: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0321 02:36:39.787] STEP: create the container
I0321 02:36:39.787] STEP: check the container status
I0321 02:36:39.787] STEP: delete the container
I0321 02:36:39.787] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0321 02:36:39.787]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:39.788] Mar 21 02:34:08.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 112 lines ...
I0321 02:36:39.805]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:36:39.805] STEP: Creating a kubernetes client
I0321 02:36:39.805] STEP: Building a namespace api object, basename init-container
I0321 02:36:39.805] Mar 21 02:33:28.964: INFO: Skipping waiting for service account
I0321 02:36:39.805] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.805]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:36:39.806] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:36:39.806]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:39.806] STEP: creating the pod
I0321 02:36:39.806] Mar 21 02:33:28.964: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:36:39.810] Mar 21 02:34:14.539: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b61bc107-4b81-11e9-8f18-42010a8a0063", GenerateName:"", Namespace:"init-container-7780", SelfLink:"/api/v1/namespaces/init-container-7780/pods/pod-init-b61bc107-4b81-11e9-8f18-42010a8a0063", UID:"b623d028-4b81-11e9-81f1-42010a8a0063", ResourceVersion:"2585", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688732409, loc:(*time.Location)(0xbdcf8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"964740489"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006db070), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-37b7fc6c-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00083b380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006db0e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006db100)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006db110), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006db114)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732409, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732409, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732409, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732409, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.99", PodIP:"10.100.0.127", StartTime:(*v1.Time)(0xc000fc5f60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005736c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000573730)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://c955b54086f9e733b263bcfcd7df8a49c4a149d98da91d1d3480010c5551fa51"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000fc5f80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000fc5fa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0321 02:36:39.811] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.811]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:36:39.811] Mar 21 02:34:14.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:36:39.811] STEP: Destroying namespace "init-container-7780" for this suite.
I0321 02:36:39.811] Mar 21 02:34:36.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0321 02:36:39.811] Mar 21 02:34:36.624: INFO: namespace init-container-7780 deletion completed in 22.082645479s
I0321 02:36:39.812] 
I0321 02:36:39.812] 
I0321 02:36:39.812] • [SLOW TEST:67.663 seconds]
I0321 02:36:39.812] [k8s.io] InitContainer [NodeConformance]
I0321 02:36:39.812] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:36:39.812]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:36:39.812]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:39.812] ------------------------------
I0321 02:36:39.812] S
I0321 02:36:39.812] ------------------------------
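The PodSpec dumped above boils down to the following shape: under `restartPolicy: Always`, the failing first init container is restarted with backoff (its `RestartCount` climbs, 3 in the dump), while `init2` and the app container `run1` stay `Waiting` and the pod remains `Pending`. A reduced sketch of that spec (the real test generates a unique pod name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo   # illustrative; the test appends a unique suffix
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails; restarted with backoff
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never started while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1   # stays Waiting; app containers must not start
```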
I0321 02:36:39.812] [BeforeEach] [sig-network] Networking
I0321 02:36:39.813]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
... skipping 1107 lines ...
I0321 02:36:40.014]     should execute prestop exec hook properly [NodeConformance] [Conformance]
I0321 02:36:40.015]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:36:40.015] ------------------------------
I0321 02:36:40.015] I0321 02:36:32.110635    1283 e2e_node_suite_test.go:186] Stopping node services...
I0321 02:36:40.015] I0321 02:36:32.110680    1283 server.go:258] Kill server "services"
I0321 02:36:40.015] I0321 02:36:32.110714    1283 server.go:295] Killing process 1822 (services) with -TERM
I0321 02:36:40.016] E0321 02:36:32.177461    1283 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0321 02:36:40.016] I0321 02:36:32.177480    1283 server.go:258] Kill server "kubelet"
I0321 02:36:40.016] I0321 02:36:32.187142    1283 services.go:146] Fetching log files...
I0321 02:36:40.016] I0321 02:36:32.187207    1283 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0321 02:36:40.016] I0321 02:36:32.287114    1283 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0321 02:36:40.017] I0321 02:36:32.743542    1283 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0321 02:36:40.017] I0321 02:36:32.769569    1283 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190321T022645.service].
I0321 02:36:40.017] I0321 02:36:33.561425    1283 e2e_node_suite_test.go:191] Tests Finished
I0321 02:36:40.017] 
I0321 02:36:40.017] 
I0321 02:36:40.018] Ran 156 of 298 Specs in 571.177 seconds
I0321 02:36:40.018] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 142 Skipped 
I0321 02:36:40.018] 
I0321 02:36:40.018] Ginkgo ran 1 suite in 9m34.729525192s
I0321 02:36:40.018] Test Suite Passed
I0321 02:36:40.018] 
I0321 02:36:40.018] Success Finished Test Suite on Host tmp-node-e2e-37b7fc6c-cos-stable-60-9592-84-0
I0321 02:36:40.019] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 525 lines ...
I0321 02:39:33.584] STEP: submitting the pod to kubernetes
I0321 02:39:33.584] STEP: verifying the pod is in kubernetes
I0321 02:39:33.584] STEP: updating the pod
I0321 02:39:33.584] Mar 21 02:28:55.440: INFO: Successfully updated pod "pod-update-activedeadlineseconds-104ce0c3-4b81-11e9-a950-42010a8a0065"
I0321 02:39:33.585] Mar 21 02:28:55.440: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-104ce0c3-4b81-11e9-a950-42010a8a0065" in namespace "pods-5132" to be "terminated due to deadline exceeded"
I0321 02:39:33.585] Mar 21 02:28:55.462: INFO: Pod "pod-update-activedeadlineseconds-104ce0c3-4b81-11e9-a950-42010a8a0065": Phase="Running", Reason="", readiness=true. Elapsed: 21.794482ms
I0321 02:39:33.585] Mar 21 02:28:57.464: INFO: Pod "pod-update-activedeadlineseconds-104ce0c3-4b81-11e9-a950-42010a8a0065": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.023867998s
I0321 02:39:33.585] Mar 21 02:28:57.464: INFO: Pod "pod-update-activedeadlineseconds-104ce0c3-4b81-11e9-a950-42010a8a0065" satisfied condition "terminated due to deadline exceeded"
I0321 02:39:33.585] [AfterEach] [k8s.io] Pods
I0321 02:39:33.585]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:33.585] Mar 21 02:28:57.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:39:33.586] STEP: Destroying namespace "pods-5132" for this suite.
I0321 02:39:33.586] Mar 21 02:29:03.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 310 lines ...
I0321 02:39:33.619] STEP: Creating a kubernetes client
I0321 02:39:33.619] STEP: Building a namespace api object, basename container-runtime
I0321 02:39:33.619] Mar 21 02:29:19.965: INFO: Skipping waiting for service account
I0321 02:39:33.619] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance]
I0321 02:39:33.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:191
I0321 02:39:33.620] STEP: create the container
I0321 02:39:33.620] STEP: wait for the container to reach Failed
I0321 02:39:33.620] STEP: get the container status
I0321 02:39:33.620] STEP: the container should be terminated
I0321 02:39:33.620] STEP: the termination message should be set
I0321 02:39:33.620] Mar 21 02:29:22.007: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0321 02:39:33.620] STEP: delete the container
I0321 02:39:33.620] [AfterEach] [k8s.io] Container Runtime
... skipping 2765 lines ...
I0321 02:39:34.003]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:39:34.003] STEP: Creating a kubernetes client
I0321 02:39:34.003] STEP: Building a namespace api object, basename init-container
I0321 02:39:34.003] Mar 21 02:34:01.902: INFO: Skipping waiting for service account
I0321 02:39:34.003] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.004]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:39:34.004] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:39:34.004]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:39:34.004] STEP: creating the pod
I0321 02:39:34.004] Mar 21 02:34:01.902: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:39:34.004] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.005]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:34.005] Mar 21 02:34:04.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0321 02:39:34.005] Mar 21 02:34:10.287: INFO: namespace init-container-9736 deletion completed in 6.15347401s
I0321 02:39:34.005] 
I0321 02:39:34.005] 
I0321 02:39:34.005] • [SLOW TEST:8.388 seconds]
I0321 02:39:34.005] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.006] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:39:34.006]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0321 02:39:34.006]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:39:34.006] ------------------------------
I0321 02:39:34.006] [BeforeEach] [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv]
I0321 02:39:34.006]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gke_environment_test.go:315
I0321 02:39:34.006] Mar 21 02:34:10.289: INFO: Skipped because system spec name "" is not in [gke]
I0321 02:39:34.006] 
... skipping 47 lines ...
I0321 02:39:34.013] [BeforeEach] [k8s.io] Security Context
I0321 02:39:34.013]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0321 02:39:34.013] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0321 02:39:34.014]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0321 02:39:34.014] Mar 21 02:34:09.819: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-ce6d0899-4b81-11e9-85c7-42010a8a0065" in namespace "security-context-test-7957" to be "success or failure"
I0321 02:39:34.014] Mar 21 02:34:09.820: INFO: Pod "busybox-readonly-true-ce6d0899-4b81-11e9-85c7-42010a8a0065": Phase="Pending", Reason="", readiness=false. Elapsed: 1.428987ms
I0321 02:39:34.014] Mar 21 02:34:11.822: INFO: Pod "busybox-readonly-true-ce6d0899-4b81-11e9-85c7-42010a8a0065": Phase="Failed", Reason="", readiness=false. Elapsed: 2.003315906s
I0321 02:39:34.014] Mar 21 02:34:11.822: INFO: Pod "busybox-readonly-true-ce6d0899-4b81-11e9-85c7-42010a8a0065" satisfied condition "success or failure"
I0321 02:39:34.015] [AfterEach] [k8s.io] Security Context
I0321 02:39:34.015]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:34.015] Mar 21 02:34:11.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:39:34.015] STEP: Destroying namespace "security-context-test-7957" for this suite.
I0321 02:39:34.015] Mar 21 02:34:17.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 128 lines ...
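The Security Context test above runs a busybox container with `readOnlyRootFilesystem: true` and accepts either outcome of its "success or failure" condition; here the write to the read-only rootfs fails and the pod ends in `Phase="Failed"`. A minimal sketch of such a pod (the name and the exact command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-true-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Illustrative command: any write to the root filesystem fails with EROFS.
    command: ["/bin/sh", "-c", "touch /file"]
    securityContext:
      readOnlyRootFilesystem: true
```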
I0321 02:39:34.033] Mar 21 02:29:16.038: INFO: Skipping waiting for service account
I0321 02:39:34.033] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0321 02:39:34.033]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0321 02:39:34.033] STEP: create the container
I0321 02:39:34.033] STEP: check the container status
I0321 02:39:34.033] STEP: delete the container
I0321 02:39:34.033] Mar 21 02:34:16.149: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0321 02:39:34.034] STEP: create the container
I0321 02:39:34.034] STEP: check the container status
I0321 02:39:34.034] STEP: delete the container
I0321 02:39:34.034] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0321 02:39:34.034]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:34.034] Mar 21 02:34:18.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 288 lines ...
I0321 02:39:34.073]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:39:34.073] STEP: Creating a kubernetes client
I0321 02:39:34.073] STEP: Building a namespace api object, basename init-container
I0321 02:39:34.073] Mar 21 02:33:48.452: INFO: Skipping waiting for service account
I0321 02:39:34.073] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.073]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0321 02:39:34.073] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:39:34.074]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:39:34.074] STEP: creating the pod
I0321 02:39:34.074] Mar 21 02:33:48.460: INFO: PodSpec: initContainers in spec.initContainers
I0321 02:39:34.078] Mar 21 02:34:28.311: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c1b94990-4b81-11e9-bcb5-42010a8a0065", GenerateName:"", Namespace:"init-container-6400", SelfLink:"/api/v1/namespaces/init-container-6400/pods/pod-init-c1b94990-4b81-11e9-bcb5-42010a8a0065", UID:"c1c3b9f0-4b81-11e9-8e9b-42010a8a0065", ResourceVersion:"2543", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688732428, loc:(*time.Location)(0xbdcf8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"452088609"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010b2910), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-37b7fc6c-coreos-beta-1883-1-0-v20180911", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011b9680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010b2a00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010b2a40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010b2a50), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010b2a54)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732428, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732428, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732428, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688732428, loc:(*time.Location)(0xbdcf8e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.101", PodIP:"10.100.0.124", StartTime:(*v1.Time)(0xc0006ff400), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008002a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000800310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://cdfecd66f541cd946ed5455b0d555e56c950ffbcad570ebefc82780d4c5a25f4"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0006ff420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0006ff440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0321 02:39:34.078] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.078]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:34.078] Mar 21 02:34:28.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0321 02:39:34.079] STEP: Destroying namespace "init-container-6400" for this suite.
I0321 02:39:34.079] Mar 21 02:34:50.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0321 02:39:34.079] Mar 21 02:34:50.371: INFO: namespace init-container-6400 deletion completed in 22.054096504s
I0321 02:39:34.079] 
I0321 02:39:34.079] 
I0321 02:39:34.079] • [SLOW TEST:61.935 seconds]
I0321 02:39:34.079] [k8s.io] InitContainer [NodeConformance]
I0321 02:39:34.079] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
I0321 02:39:34.079]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0321 02:39:34.079]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
I0321 02:39:34.080] ------------------------------
I0321 02:39:34.080] [BeforeEach] [sig-storage] Downward API volume
I0321 02:39:34.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0321 02:39:34.080] STEP: Creating a kubernetes client
I0321 02:39:34.080] STEP: Building a namespace api object, basename downward-api
... skipping 1148 lines ...
I0321 02:39:34.252] Mar 21 02:34:17.897: INFO: Skipping waiting for service account
I0321 02:39:34.252] [It] should not be able to pull from private registry without secret [NodeConformance]
I0321 02:39:34.252]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:367
I0321 02:39:34.252] STEP: create the container
I0321 02:39:34.252] STEP: check the container status
I0321 02:39:34.253] STEP: delete the container
I0321 02:39:34.253] Mar 21 02:39:18.778: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0321 02:39:34.253] STEP: create the container
I0321 02:39:34.253] STEP: check the container status
I0321 02:39:34.253] STEP: delete the container
I0321 02:39:34.253] [AfterEach] [k8s.io] Container Runtime
I0321 02:39:34.253]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0321 02:39:34.253] Mar 21 02:39:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 16 lines ...
I0321 02:39:34.255] I0321 02:39:26.954878    1309 server.go:258] Kill server "services"
I0321 02:39:34.255] I0321 02:39:26.954890    1309 server.go:295] Killing process 2088 (services) with -TERM
I0321 02:39:34.255] I0321 02:39:27.167567    1309 server.go:258] Kill server "kubelet"
I0321 02:39:34.256] I0321 02:39:27.177382    1309 services.go:146] Fetching log files...
I0321 02:39:34.256] I0321 02:39:27.177465    1309 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0321 02:39:34.256] I0321 02:39:27.389230    1309 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0321 02:39:34.256] E0321 02:39:27.394292    1309 services.go:158] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0321 02:39:34.256] , exit status 1
I0321 02:39:34.256] I0321 02:39:27.394329    1309 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0321 02:39:34.257] I0321 02:39:27.406564    1309 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190321T022645.service].
I0321 02:39:34.257] I0321 02:39:27.422896    1309 e2e_node_suite_test.go:191] Tests Finished
I0321 02:39:34.257] 
I0321 02:39:34.257] 
I0321 02:39:34.257] Ran 156 of 296 Specs in 746.360 seconds
I0321 02:39:34.257] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 140 Skipped 
I0321 02:39:34.257] 
I0321 02:39:34.257] Ginkgo ran 1 suite in 12m28.383051548s
I0321 02:39:34.257] Test Suite Passed
I0321 02:39:34.257] 
I0321 02:39:34.258] Success Finished Test Suite on Host tmp-node-e2e-37b7fc6c-coreos-beta-1883-1-0-v20180911
I0321 02:39:34.258] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0321 02:39:34.360] 2019/03/21 02:39:34 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 17m17.683886682s
W0321 02:39:34.360] 2019/03/21 02:39:34 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0321 02:39:34.360] 2019/03/21 02:39:34 node.go:52: Noop - Node Down()
W0321 02:39:34.379] 2019/03/21 02:39:34 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0321 02:39:34.380] 2019/03/21 02:39:34 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0321 02:39:34.751] 2019/03/21 02:39:34 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 370.928729ms
W0321 02:39:34.752] 2019/03/21 02:39:34 main.go:307: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0321 02:39:34.755] Traceback (most recent call last):
W0321 02:39:34.755]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0321 02:39:34.755]     main(parse_args())
W0321 02:39:34.755]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0321 02:39:34.755]     mode.start(runner_args)
W0321 02:39:34.756]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0321 02:39:34.756]     check_env(env, self.command, *args)
W0321 02:39:34.756]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0321 02:39:34.756]     subprocess.check_call(cmd, env=env)
W0321 02:39:34.756]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0321 02:39:34.756]     raise CalledProcessError(retcode, cmd)
W0321 02:39:34.757] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
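The traceback above ends in `subprocess.check_call` raising `CalledProcessError` because the wrapped `kubetest` command exited 1. A minimal sketch of that failure path (note: `run_checked` is a simplified stand-in for the `check_env` helper in `kubernetes_e2e.py`, not its actual source):

```python
import subprocess
import sys

def run_checked(cmd):
    """Run cmd and return its exit status, mirroring how check_env in
    kubernetes_e2e.py surfaces a child's non-zero exit via check_call."""
    try:
        # check_call raises CalledProcessError whenever the child
        # process exits with a non-zero status.
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError as err:
        # The runner logs "Command failed" and propagates this code.
        return err.returncode
    return 0

# A child that exits 1, standing in for the failing kubetest invocation.
status = run_checked([sys.executable, "-c", "import sys; sys.exit(1)"])
print(status)  # 1
```

The log's `process 491 exited with code 1` line below is this same return code, propagated from `kubetest` up through `check_call` to the bootstrap process.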
E0321 02:39:34.767] Command failed
I0321 02:39:34.768] process 491 exited with code 1 after 17.3m
E0321 02:39:34.768] FAIL: pull-kubernetes-node-e2e
I0321 02:39:34.768] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0321 02:39:35.306] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0321 02:39:35.375] process 39081 exited with code 0 after 0.0m
I0321 02:39:35.376] Call:  gcloud config get-value account
I0321 02:39:35.705] process 39093 exited with code 0 after 0.0m
I0321 02:39:35.705] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0321 02:39:35.705] Upload result and artifacts...
I0321 02:39:35.705] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75474/pull-kubernetes-node-e2e/123825
I0321 02:39:35.706] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75474/pull-kubernetes-node-e2e/123825/artifacts
W0321 02:39:36.887] CommandException: One or more URLs matched no objects.
E0321 02:39:37.058] Command failed
I0321 02:39:37.059] process 39105 exited with code 1 after 0.0m
W0321 02:39:37.059] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75474/pull-kubernetes-node-e2e/123825/artifacts not exist yet
I0321 02:39:37.059] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75474/pull-kubernetes-node-e2e/123825/artifacts
I0321 02:39:40.504] process 39247 exited with code 0 after 0.1m
I0321 02:39:40.505] Call:  git rev-parse HEAD
I0321 02:39:40.509] process 39890 exited with code 0 after 0.0m
... skipping 21 lines ...