Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-07 00:15
Elapsed: 2h38m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/c02e754d-7251-427a-a9b9-ba7ed2c547f1/targets/test

No Test Failures!


Error lines from build-log.txt

starting docker...............................................................................................................................................................................................................................................................................................................failed
time="2020-01-07T00:15:56.382667630Z" level=info msg="libcontainerd: started new docker-containerd process" pid=118
time="2020-01-07T00:15:56.392206173Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-01-07T00:15:56.392224143Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-01-07T00:15:56.393363317Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
time="2020-01-07T00:15:56.393412749Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-01-07T00:15:56.393543900Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201d5240, CONNECTING" module=grpc
time="2020-01-07T00:15:57Z" level=info msg="starting containerd" revision=468a545b9edcd5932818eb9de8e72413e616e86e version=v1.1.2 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1 
time="2020-01-07T00:15:57Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /docker-graph/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 
time="2020-01-07T00:15:57Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "": exec: "modprobe": executable file not found in $PATH" 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
time="2020-01-07T00:15:57Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /docker-graph/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 
time="2020-01-07T00:15:57Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /docker-graph/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
time="2020-01-07T00:15:57Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "": exec: "modprobe": executable file not found in $PATH" 
time="2020-01-07T00:15:57Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /docker-graph/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 
time="2020-01-07T00:15:57Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 
... skipping 30 lines ...
time="2020-01-07T00:15:57.355816255Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-01-07T00:15:57.355913110Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
time="2020-01-07T00:15:57.355954194Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-01-07T00:15:57.356043186Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4204175f0, CONNECTING" module=grpc
time="2020-01-07T00:15:57.370813480Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4204175f0, READY" module=grpc
time="2020-01-07T00:15:57.372583899Z" level=info msg="Loading containers: start."
time="2020-01-07T00:15:57.375723101Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: , error: exec: \"modprobe\": executable file not found in $PATH"
time="2020-01-07T00:15:57.375818538Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exec: \"modprobe\": executable file not found in $PATH"
time="2020-01-07T00:15:57.375890211Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exec: \"modprobe\": executable file not found in $PATH"
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables: Operation not supported.
 (exit status 1)
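
The dockerd startup above dies while creating its NAT chain. A minimal reproduction sketch, assuming a shell inside the same build container that produced this log; both commands are taken from the error and warnings above, nothing here comes from the job configuration:

    # assumption: run inside the CI build container from this job
    iptables -t nat -N DOCKER        # fails in this environment with "iptables: Operation not supported."
    command -v modprobe || echo "modprobe not found in PATH"   # matches the earlier modprobe warnings
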
hack/images/ci/conformance.sh
external cluster access enabled
one-time TLS CA generation enabled

Initializing the backend...
... skipping 791 lines ...
Jan  7 00:28:13.251: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan  7 00:28:13.257: INFO: e2e test version: v1.13.12
Jan  7 00:28:13.258: INFO: kube-apiserver version: v1.13.12
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 00:28:13.258: INFO: >>> kubeConfig: /tmp/kubeconfig-879701837
STEP: Building a namespace api object, basename daemonsets
Jan  7 00:28:13.309: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  7 00:28:13.338: INFO: Number of nodes with available pods: 0
Jan  7 00:28:13.338: INFO: Node c01.00bac5a.sk8 is running more than one daemon pod
Jan  7 00:28:14.354: INFO: Number of nodes with available pods: 0
... skipping 19 lines ...
Jan  7 00:28:24.346: INFO: Number of nodes with available pods: 4
Jan  7 00:28:24.346: INFO: Node c02.00bac5a.sk8 is running more than one daemon pod
Jan  7 00:28:25.346: INFO: Number of nodes with available pods: 4
Jan  7 00:28:25.346: INFO: Node c02.00bac5a.sk8 is running more than one daemon pod
Jan  7 00:28:26.346: INFO: Number of nodes with available pods: 5
Jan  7 00:28:26.346: INFO: Number of running nodes: 5, number of available pods: 5
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  7 00:28:26.365: INFO: Number of nodes with available pods: 5
Jan  7 00:28:26.365: INFO: Number of running nodes: 5, number of available pods: 5
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-r2sbv, will wait for the garbage collector to delete the pods
Jan  7 00:28:27.438: INFO: Deleting DaemonSet.extensions daemon-set took: 7.864465ms
Jan  7 00:28:27.538: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.396283ms
... skipping 39 lines ...
.
.
.
.
.
.
Jan  7 00:48:27.538: INFO: ERROR: Pod "daemon-set-trg78" still exists. Node: "c02.00bac5a.sk8"
Jan  7 00:48:27.538: INFO: Unexpected error occurred: error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-trg78" on node "c02.00bac5a.sk8"
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-daemonsets-r2sbv".
STEP: Found 45 events.
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:13 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-z7s8q
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:13 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-8lz7h
... skipping 23 lines ...
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:17 +0000 UTC - event for daemon-set-8lz7h: {kubelet w02.00bac5a.sk8} Started: Started container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:17 +0000 UTC - event for daemon-set-hbbm7: {kubelet w01.00bac5a.sk8} Started: Started container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:25 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} Started: Started container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:25 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} Created: Created container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:25 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} Pulled: Successfully pulled image "docker.io/library/nginx:1.14-alpine"
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulDelete: Deleted pod: daemon-set-8lz7h
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set: {daemonset-controller } FailedDaemonPod: Found failed daemon pod e2e-tests-daemonsets-r2sbv/daemon-set-8lz7h on node w02.00bac5a.sk8, will try to kill it
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-rh4vq
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set-8lz7h: {kubelet w02.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set-rh4vq: {default-scheduler } Scheduled: Successfully assigned e2e-tests-daemonsets-r2sbv/daemon-set-rh4vq to w02.00bac5a.sk8
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set-rh4vq: {kubelet w02.00bac5a.sk8} Started: Started container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set-rh4vq: {kubelet w02.00bac5a.sk8} Created: Created container
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:26 +0000 UTC - event for daemon-set-rh4vq: {kubelet w02.00bac5a.sk8} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:27 +0000 UTC - event for daemon-set-hbbm7: {kubelet w01.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:27 +0000 UTC - event for daemon-set-t442b: {kubelet c01.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:27 +0000 UTC - event for daemon-set-z7s8q: {kubelet w03.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:28 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "970a1ba7-30e4-11ea-bbab-005056b032e6" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ac61f1cbf5c203c9ff33059e15db123369e8ffe094c56c6229f09c3a9589ddb\": unknown FS magic on \"/var/run/netns/cni-1d1b2c4e-6118-2d51-3c8f-bd6ba68c9097\": 1021994"

Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:28 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "970a1ba7-30e4-11ea-bbab-005056b032e6" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to remove network namespace for sandbox \"0ac61f1cbf5c203c9ff33059e15db123369e8ffe094c56c6229f09c3a9589ddb\": failed to close network namespace: Failed to clean up namespace /var/run/netns/cni-1d1b2c4e-6118-2d51-3c8f-bd6ba68c9097: remove /var/run/netns/cni-1d1b2c4e-6118-2d51-3c8f-bd6ba68c9097: device or resource busy"

Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:28 +0000 UTC - event for daemon-set-trg78: {kubelet c02.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.544: INFO: At 2020-01-07 00:28:29 +0000 UTC - event for daemon-set-rh4vq: {kubelet w02.00bac5a.sk8} Killing: Killing container with id containerd://app:Need to kill Pod
Jan  7 00:48:27.550: INFO: POD                                                      NODE             PHASE    GRACE  CONDITIONS
Jan  7 00:48:27.550: INFO: daemon-set-trg78                                         c02.00bac5a.sk8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:28:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:28:28 +0000 UTC ContainersNotReady containers with unready status: [app]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:28:28 +0000 UTC ContainersNotReady containers with unready status: [app]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:28:13 +0000 UTC  }]
Jan  7 00:48:27.550: INFO: kube-dns-6ccd496668-vmrql                                w01.00bac5a.sk8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:26:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:27:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:27:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 00:26:57 +0000 UTC  }]
... skipping 111 lines ...
Jan  7 00:58:27.972: INFO: 
Jan  7 00:58:27.972: INFO: Couldn't delete ns: "e2e-tests-daemonsets-r2sbv": namespace e2e-tests-daemonsets-r2sbv was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace e2e-tests-daemonsets-r2sbv was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})

• Failure in Spec Teardown (AfterEach) [1814.715 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance] [AfterEach]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0018c2420>: {
          s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-trg78\" on node \"c02.00bac5a.sk8\"",
      }
      error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-trg78" on node "c02.00bac5a.sk8"
  not to have occurred

  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:75
------------------------------
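The teardown failed because pod daemon-set-trg78 on node c02.00bac5a.sk8 was never removed after the sandbox teardown errors above, so namespace e2e-tests-daemonsets-r2sbv could not be deleted. A hedged follow-up sketch for manual inspection, assuming the kubeconfig path printed in the log is still reachable (standard kubectl calls, not part of the test itself):

    kubectl --kubeconfig=/tmp/kubeconfig-879701837 -n e2e-tests-daemonsets-r2sbv get pod daemon-set-trg78 -o wide
    kubectl --kubeconfig=/tmp/kubeconfig-879701837 -n e2e-tests-daemonsets-r2sbv describe pod daemon-set-trg78
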
SSS
------------------------------
... skipping 780 lines ...
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5dfd5
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5dfd5 to expose endpoints map[]
Jan  7 01:32:41.613: INFO: Get endpoints failed (2.923305ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  7 01:32:42.616: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5dfd5 exposes endpoints map[] (1.00594202s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5dfd5
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5dfd5 to expose endpoints map[pod1:[80]]
Jan  7 01:32:43.637: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5dfd5 exposes endpoints map[pod1:[80]] (1.012300599s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5dfd5
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5dfd5 to expose endpoints map[pod2:[80] pod1:[80]]
... skipping 882 lines ...
[sig-network] DNS
/workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 01:39:34.568: INFO: >>> kubeConfig: /tmp/kubeconfig-879701837
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  7 01:39:34.617: INFO: PodSpec: initContainers in spec.initContainers
Jan  7 01:40:23.494: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8ee886b9-30ee-11ea-80c5-e2b8477c9fef", GenerateName:"", Namespace:"e2e-tests-init-container-6wr5j", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-6wr5j/pods/pod-init-8ee886b9-30ee-11ea-80c5-e2b8477c9fef", UID:"8ee8e66b-30ee-11ea-ab58-005056b08c0e", ResourceVersion:"9845", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713957974, loc:(*time.Location)(0x797dc20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"617468440"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m4rh2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002400840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m4rh2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m4rh2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m4rh2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022acb88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"w01.00bac5a.sk8", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ac2660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022acc10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022acc30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022acc38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022acc3c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713957974, loc:(*time.Location)(0x797dc20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713957974, loc:(*time.Location)(0x797dc20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713957974, loc:(*time.Location)(0x797dc20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713957974, loc:(*time.Location)(0x797dc20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.3.40", PodIP:"10.200.2.13", StartTime:(*v1.Time)(0xc001b5bee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022e1b20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022e1b90)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://822c91c5f4f142278d066fbdbd62fd624db313bbbe256fe169ba235541639c07"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b5bf20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b5bf00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 01:40:23.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6wr5j" for this suite.
Jan  7 01:40:45.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 01:40:45.511: INFO: namespace: e2e-tests-init-container-6wr5j, resource: bindings, ignored listing per whitelist
Jan  7 01:40:45.570: INFO: namespace e2e-tests-init-container-6wr5j deletion completed in 22.071865584s

• [SLOW TEST:71.001 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
... skipping 2166 lines ...
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-dr4vj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dr4vj to expose endpoints map[]
Jan  7 02:07:08.288: INFO: Get endpoints failed (1.977618ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  7 02:07:09.291: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dr4vj exposes endpoints map[] (1.004902742s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-dr4vj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dr4vj to expose endpoints map[pod1:[100]]
Jan  7 02:07:11.311: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dr4vj exposes endpoints map[pod1:[100]] (2.014933068s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-dr4vj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dr4vj to expose endpoints map[pod2:[101] pod1:[100]]
... skipping 219 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan  7 02:09:33.865: INFO: error from create uninitialized namespace: Internal error occurred: object deleted while waiting for creation
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 02:09:50.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-tsb6m" for this suite.
... skipping 226 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  7 02:11:02.021: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef"
Jan  7 02:11:02.021: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef" in namespace "e2e-tests-pods-hskk8" to be "terminated due to deadline exceeded"
Jan  7 02:11:02.023: INFO: Pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef": Phase="Running", Reason="", readiness=true. Elapsed: 2.210206ms
Jan  7 02:11:04.026: INFO: Pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef": Phase="Running", Reason="", readiness=true. Elapsed: 2.005365554s
Jan  7 02:11:06.029: INFO: Pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.008237424s
Jan  7 02:11:06.029: INFO: Pod "pod-update-activedeadlineseconds-f2622f06-30f2-11ea-80c5-e2b8477c9fef" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 02:11:06.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hskk8" for this suite.
Jan  7 02:11:12.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 521 lines ...
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  7 02:14:48.586: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-grpkl.svc from pod e2e-tests-dns-grpkl/dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef: the server could not find the requested resource (get pods dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef)
Jan  7 02:14:48.612: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-grpkl/dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef: the server could not find the requested resource (get pods dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef)
Jan  7 02:14:48.648: INFO: Lookups using e2e-tests-dns-grpkl/dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-grpkl.svc jessie_udp@dns-test-service]

Jan  7 02:14:53.728: INFO: DNS probes using e2e-tests-dns-grpkl/dns-test-79b8a0d8-30f3-11ea-80c5-e2b8477c9fef succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 1096 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-v89bw
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-v89bw
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-v89bw
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-v89bw
Jan  7 02:22:12.139: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v89bw, name: ss-0, uid: 833302ed-30f4-11ea-bbab-005056b032e6, status phase: Pending. Waiting for statefulset controller to delete.
Jan  7 02:22:12.541: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v89bw, name: ss-0, uid: 833302ed-30f4-11ea-bbab-005056b032e6, status phase: Failed. Waiting for statefulset controller to delete.
Jan  7 02:22:12.549: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v89bw, name: ss-0, uid: 833302ed-30f4-11ea-bbab-005056b032e6, status phase: Failed. Waiting for statefulset controller to delete.
Jan  7 02:22:12.553: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-v89bw
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-v89bw
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-v89bw and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 02:22:16.576: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v89bw
... skipping 50 lines ...
Jan  7 02:22:35.112: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  7 02:22:35.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe pod redis-master-rzwfz --namespace=e2e-tests-kubectl-d5mzw'
Jan  7 02:22:35.187: INFO: stderr: ""
Jan  7 02:22:35.187: INFO: stdout: "Name:               redis-master-rzwfz\nNamespace:          e2e-tests-kubectl-d5mzw\nPriority:           0\nPriorityClassName:  <none>\nNode:               c02.00bac5a.sk8/192.168.3.176\nStart Time:         Tue, 07 Jan 2020 02:22:32 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.200.4.42\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://a60eadbca12986a0fe48c4a4f5dcab76bccf0c7f7d263e65a854d27a4f4e5020\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 07 Jan 2020 02:22:33 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rsj6v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rsj6v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rsj6v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                      Message\n  ----    ------     ----  ----                      -------\n  Normal  Scheduled  3s    default-scheduler         Successfully assigned e2e-tests-kubectl-d5mzw/redis-master-rzwfz to c02.00bac5a.sk8\n  Normal  Pulled     2s    kubelet, c02.00bac5a.sk8  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, c02.00bac5a.sk8  Created container\n  Normal  Started    2s    kubelet, c02.00bac5a.sk8  Started container\n"
Jan  7 02:22:35.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe rc redis-master --namespace=e2e-tests-kubectl-d5mzw'
Jan  7 02:22:35.272: INFO: stderr: ""
Jan  7 02:22:35.272: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-d5mzw\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: redis-master-rzwfz\n"
Jan  7 02:22:35.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe service redis-master --namespace=e2e-tests-kubectl-d5mzw'
Jan  7 02:22:35.347: INFO: stderr: ""
Jan  7 02:22:35.347: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-d5mzw\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.32.0.22\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.200.4.42:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan  7 02:22:35.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe node c01.00bac5a.sk8'
Jan  7 02:22:35.438: INFO: stderr: ""
Jan  7 02:22:35.438: INFO: stdout: "Name:               c01.00bac5a.sk8\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-centos7\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=c01.00bac5a.sk8\nAnnotations:        alpha.kubernetes.io/provided-node-ip: 192.168.3.63\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 07 Jan 2020 00:26:47 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 07 Jan 2020 02:22:25 +0000   Tue, 07 Jan 2020 00:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 07 Jan 2020 02:22:25 +0000   Tue, 07 Jan 2020 00:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 07 Jan 2020 02:22:25 +0000   Tue, 07 Jan 2020 00:26:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 07 Jan 2020 02:22:25 +0000   Tue, 07 Jan 2020 00:26:47 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  ExternalIP:  192.168.3.63\n  InternalIP:  192.168.3.63\n  Hostname:    c01.00bac5a.sk8\nCapacity:\n cpu:                16\n ephemeral-storage:  103797740Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             65807028Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  95659997026\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             65704628Ki\n pods:               110\nSystem Info:\n Machine ID:                 e89dcae7eb6a4ac28a47602ceea57d67\n System UUID:                CDD23042-69FB-3528-BA57-08E9676BA0AB\n Boot ID:                    005c6647-54b4-403d-94ff-e2386084206a\n Kernel Version:             3.10.0-957.5.1.el7.x86_64\n OS Image:                   CentOS Linux 7 (Core)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.1.7\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nProviderID:                  vsphere://4230d2cd-fb69-2835-ba57-08e9676ba0ab\nNon-terminated Pods:         (2 in total)\n  Namespace                  Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                       ------------  ----------  ---------------  -------------  ---\n  sk8e2e-ba3c8f4             sonobuoy                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114m\n  sk8e2e-ba3c8f4             sonobuoy-systemd-logs-daemon-set-c160865c45984961-6mrrm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                0 (0%)    0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 
(0%)    0 (0%)\nEvents:              <none>\n"
... skipping 845 lines ...
Jan  7 02:38:35.249: INFO: Waiting for Pod e2e-tests-statefulset-gm99s/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 02:38:35.249: INFO: Waiting for Pod e2e-tests-statefulset-gm99s/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 02:38:35.253: INFO: Waiting for StatefulSet e2e-tests-statefulset-gm99s/ss2 to complete update
Jan  7 02:38:35.253: INFO: Waiting for Pod e2e-tests-statefulset-gm99s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 02:38:35.253: INFO: Waiting for Pod e2e-tests-statefulset-gm99s/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 02:38:35.253: INFO: Waiting for Pod e2e-tests-statefulset-gm99s/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 02:38:35.253: INFO: Failed waiting for state update: timed out waiting for the condition
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.13.12-beta.0.14+a8b52209ee1722/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 02:38:35.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe po ss2-0 --namespace=e2e-tests-statefulset-gm99s'
Jan  7 02:38:35.513: INFO: stderr: ""
Jan  7 02:38:35.513: INFO: stdout: "Name:               ss2-0\nNamespace:          e2e-tests-statefulset-gm99s\nPriority:           0\nPriorityClassName:  <none>\nNode:               w03.00bac5a.sk8/192.168.3.223\nStart Time:         Tue, 07 Jan 2020 02:27:54 +0000\nLabels:             baz=blah\n                    controller-revision-hash=ss2-7c9b54fd4c\n                    foo=bar\n                    statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.200.0.46\nControlled By:      StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   containerd://7b260adfe74e4f74e6b520274ce299838cde0ddaee6e39162a72f980a6f545aa\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           <none>\n    Host Port:      <none>\n    State:          Running\n      Started:      Tue, 07 Jan 2020 02:27:55 +0000\n    Ready:          True\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b79q4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-b79q4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-b79q4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                      Message\n  ----    ------     ----  ----                      -------\n  Normal  Scheduled  10m   default-scheduler         Successfully assigned e2e-tests-statefulset-gm99s/ss2-0 to w03.00bac5a.sk8\n  Normal  Pulled     10m   kubelet, w03.00bac5a.sk8  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created    10m   kubelet, w03.00bac5a.sk8  Created container\n  Normal  Started    10m   kubelet, w03.00bac5a.sk8  Started container\n"
Jan  7 02:38:35.513: INFO: 
... skipping 155 lines ...
10.200.0.1 - - [07/Jan/2020:02:38:33 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.200.0.1 - - [07/Jan/2020:02:38:34 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.200.0.1 - - [07/Jan/2020:02:38:35 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"

Jan  7 02:38:35.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe po ss2-1 --namespace=e2e-tests-statefulset-gm99s'
Jan  7 02:38:35.689: INFO: stderr: ""
Jan  7 02:38:35.689: INFO: stdout: "Name:               ss2-1\nNamespace:          e2e-tests-statefulset-gm99s\nPriority:           0\nPriorityClassName:  <none>\nNode:               c02.00bac5a.sk8/192.168.3.176\nStart Time:         Tue, 07 Jan 2020 02:27:56 +0000\nLabels:             baz=blah\n                    controller-revision-hash=ss2-7c9b54fd4c\n                    foo=bar\n                    statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.200.4.49\nControlled By:      StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   containerd://b060f8fd3743d0c90c6005500933226a08028e5b23837d9c218bb9ba19b167da\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           <none>\n    Host Port:      <none>\n    State:          Running\n      Started:      Tue, 07 Jan 2020 02:27:57 +0000\n    Ready:          True\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b79q4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-b79q4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-b79q4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason     Age                 From                      Message\n  ----     ------     ----                ----                      -------\n  Normal   Scheduled  10m                 default-scheduler         Successfully assigned e2e-tests-statefulset-gm99s/ss2-1 to c02.00bac5a.sk8\n  Normal   Pulled     10m                 kubelet, c02.00bac5a.sk8  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal   Created    10m                 kubelet, c02.00bac5a.sk8  Created container\n  Normal   Started    10m                 kubelet, c02.00bac5a.sk8  Started container\n  Warning  Unhealthy  10m (x21 over 10m)  kubelet, c02.00bac5a.sk8  Readiness probe failed: HTTP probe failed with statuscode: 404\n"
Jan  7 02:38:35.689: INFO: 
Output of kubectl describe ss2-1:
Name:               ss2-1
Namespace:          e2e-tests-statefulset-gm99s
Priority:           0
PriorityClassName:  <none>
... skipping 41 lines ...
  Type     Reason     Age                 From                      Message
  ----     ------     ----                ----                      -------
  Normal   Scheduled  10m                 default-scheduler         Successfully assigned e2e-tests-statefulset-gm99s/ss2-1 to c02.00bac5a.sk8
  Normal   Pulled     10m                 kubelet, c02.00bac5a.sk8  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal   Created    10m                 kubelet, c02.00bac5a.sk8  Created container
  Normal   Started    10m                 kubelet, c02.00bac5a.sk8  Started container
  Warning  Unhealthy  10m (x21 over 10m)  kubelet, c02.00bac5a.sk8  Readiness probe failed: HTTP probe failed with statuscode: 404

Jan  7 02:38:35.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 logs ss2-1 --namespace=e2e-tests-statefulset-gm99s --tail=100'
Jan  7 02:38:35.772: INFO: stderr: ""
Jan  7 02:38:35.772: INFO: stdout: "10.200.4.1 - - [07/Jan/2020:02:36:56 +0000] \"GET /index.html HTTP/1.1\" 200 612 \"-\" \"kube-probe/1.13\" \"-\"\n10.200.4.1 - - [07/Jan/2020:02:36:57 +0000] \"GET /index.html HTTP/1.1\" 200 612 \"-\" \"kube-probe/1.13\" \"-\"\n
... skipping 96 lines: identical kube-probe "GET /index.html" requests (200 612), one per second from 02:36:58 to 02:38:33 +0000 ...
10.200.4.1 - - [07/Jan/2020:02:38:34 +0000] \"GET /index.html HTTP/1.1\" 200 612 \"-\" \"kube-probe/1.13\" \"-\"\n10.200.4.1 - - [07/Jan/2020:02:38:35 +0000] \"GET /index.html HTTP/1.1\" 200 612 \"-\" \"kube-probe/1.13\" \"-\"\n"
Jan  7 02:38:35.772: INFO: 
Last 100 log lines of ss2-1:
... skipping 97 lines ...
10.200.4.1 - - [07/Jan/2020:02:38:33 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.200.4.1 - - [07/Jan/2020:02:38:34 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
10.200.4.1 - - [07/Jan/2020:02:38:35 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.13" "-"
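
The kube-probe entries above are the pod's HTTP readiness probe hitting /index.html once per second; the ss2-2 description later in this log reports the StatefulSet's probe as "http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1". A minimal sketch of an equivalent probe on the nginx container, using the k8s.io/api Go types (field layout as in the 1.13-era API, where Probe embeds Handler; newer releases rename it ProbeHandler). This is illustrative only, not the e2e suite's own code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Readiness probe as reported by `kubectl describe`:
	// http-get :80/index.html, delay=0s, timeout=1s, period=1s, #success=1, #failure=1.
	probe := &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/index.html",
				Port: intstr.FromInt(80),
			},
		},
		InitialDelaySeconds: 0,
		TimeoutSeconds:      1,
		PeriodSeconds:       1,
		SuccessThreshold:    1,
		FailureThreshold:    1,
	}

	// Attached to the nginx container image named in the pod description below.
	container := corev1.Container{
		Name:           "nginx",
		Image:          "docker.io/library/nginx:1.14-alpine",
		ReadinessProbe: probe,
	}
	fmt.Printf("%+v\n", container)
}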

Jan  7 02:38:35.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 describe po ss2-2 --namespace=e2e-tests-statefulset-gm99s'
Jan  7 02:38:35.857: INFO: stderr: ""
Jan  7 02:38:35.857: INFO: stdout: "Name:                      ss2-2\nNamespace:                 e2e-tests-statefulset-gm99s\nPriority:                  0\nPriorityClassName:         <none>\nNode:                      w01.00bac5a.sk8/192.168.3.40\nStart Time:                Tue, 07 Jan 2020 02:27:58 +0000\nLabels:                    baz=blah\n                           controller-revision-hash=ss2-7c9b54fd4c\n                           foo=bar\n                           statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations:               <none>\nStatus:                    Terminating (lasts <invalid>)\nTermination Grace Period:  30s\nIP:                        10.200.2.43\nControlled By:             StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   containerd://5b2b28bc8f0034c51889eb27eea9fc8fbadaf0263e19a19809e81033be809cf4\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           <none>\n    Host Port:      <none>\n    State:          Terminated\n      Exit Code:    0\n      Started:      Mon, 01 Jan 0001 00:00:00 +0000\n      Finished:     Mon, 01 Jan 0001 00:00:00 +0000\n    Ready:          False\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b79q4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             False \n  ContainersReady   False \n  PodScheduled      True \nVolumes:\n  default-token-b79q4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-b79q4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason         Age   From                      Message\n  ----     ------         ----  ----                      -------\n  Normal   Scheduled      10m   default-scheduler         Successfully assigned e2e-tests-statefulset-gm99s/ss2-2 to w01.00bac5a.sk8\n  Normal   Pulled         10m   kubelet, w01.00bac5a.sk8  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal   Created        10m   kubelet, w01.00bac5a.sk8  Created container\n  Normal   Started        10m   kubelet, w01.00bac5a.sk8  Started container\n  Normal   Killing        10m   kubelet, w01.00bac5a.sk8  Killing container with id containerd://nginx:Need to kill Pod\n  Warning  FailedKillPod  10m   kubelet, w01.00bac5a.sk8  error killing pod: failed to \"KillPodSandbox\" for \"5193f7e1-30f5-11ea-bbab-005056b032e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to remove network namespace for sandbox \\\"2b6de4b4871b00e23ffb2e913ca0a7fc2a03433a8449dd93b293aecabbffa68f\\\": failed to close network namespace: Failed to clean up namespace /var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2: remove /var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2: device or resource busy\"\n  Warning  FailedKillPod  10m   kubelet, w01.00bac5a.sk8  error killing pod: failed to \"KillPodSandbox\" for \"5193f7e1-30f5-11ea-bbab-005056b032e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2b6de4b4871b00e23ffb2e913ca0a7fc2a03433a8449dd93b293aecabbffa68f\\\": unknown FS magic on \\\"/var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2\\\": 1021994\"\n"
Jan  7 02:38:35.857: INFO: 
Output of kubectl describe ss2-2:
Name:                      ss2-2
Namespace:                 e2e-tests-statefulset-gm99s
Priority:                  0
PriorityClassName:         <none>
... skipping 45 lines ...
  ----     ------         ----  ----                      -------
  Normal   Scheduled      10m   default-scheduler         Successfully assigned e2e-tests-statefulset-gm99s/ss2-2 to w01.00bac5a.sk8
  Normal   Pulled         10m   kubelet, w01.00bac5a.sk8  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal   Created        10m   kubelet, w01.00bac5a.sk8  Created container
  Normal   Started        10m   kubelet, w01.00bac5a.sk8  Started container
  Normal   Killing        10m   kubelet, w01.00bac5a.sk8  Killing container with id containerd://nginx:Need to kill Pod
  Warning  FailedKillPod  10m   kubelet, w01.00bac5a.sk8  error killing pod: failed to "KillPodSandbox" for "5193f7e1-30f5-11ea-bbab-005056b032e6" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to remove network namespace for sandbox \"2b6de4b4871b00e23ffb2e913ca0a7fc2a03433a8449dd93b293aecabbffa68f\": failed to close network namespace: Failed to clean up namespace /var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2: remove /var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2: device or resource busy"
  Warning  FailedKillPod  10m   kubelet, w01.00bac5a.sk8  error killing pod: failed to "KillPodSandbox" for "5193f7e1-30f5-11ea-bbab-005056b032e6" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b6de4b4871b00e23ffb2e913ca0a7fc2a03433a8449dd93b293aecabbffa68f\": unknown FS magic on \"/var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2\": 1021994"
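
The two FailedKillPod events above are containerd failing to tear down the sandbox's network namespace: first the bind-mounted netns file cannot be removed ("device or resource busy"), then the retry reports "unknown FS magic ... 1021994" on whatever is left at that path (read as hex, 0x01021994 matches the tmpfs magic rather than an nsfs mount). A rough sketch of how one might reproduce that filesystem-magic check on the node with golang.org/x/sys/unix; the path is copied from the event and may no longer exist:

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Path taken from the FailedKillPod event above; purely illustrative.
	path := "/var/run/netns/cni-5600ac24-c7c2-196b-1be4-a5ecc06e26b2"

	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		log.Fatalf("statfs %s: %v", path, err)
	}

	// A live, bind-mounted network namespace should report the nsfs magic;
	// anything else suggests the mount is gone and only the placeholder file
	// on the parent filesystem remains.
	fmt.Printf("FS magic of %s: %x\n", path, st.Type)
}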

Jan  7 02:38:35.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-879701837 logs ss2-2 --namespace=e2e-tests-statefulset-gm99s --tail=100'
Jan  7 02:38:35.949: INFO: rc: 1
Jan  7 02:38:35.949: INFO: 
Last 100 log lines of ss2-2:

... skipping 49 lines ...
Jan  7 02:53:45.975: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan  7 02:53:55.974: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan  7 02:54:05.974: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan  7 02:54:15.974: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan  7 02:54:25.974: INFO: Waiting for stateful set status.replicas to become 0, currently 3
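
The repeated "Waiting for stateful set status.replicas to become 0" lines above are the e2e suite polling the StatefulSet status every 10 seconds until the run is terminated. A rough sketch of that kind of wait loop using k8s.io/apimachinery's wait package; getStatusReplicas here is a hypothetical accessor standing in for reading ss.Status.Replicas through a clientset, and the intervals in main are shortened for the toy demo:

package main

import (
	"fmt"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForZeroReplicas polls until status.replicas reaches 0 or the timeout expires.
func waitForZeroReplicas(interval, timeout time.Duration, getStatusReplicas func() (int32, error)) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		n, err := getStatusReplicas()
		if err != nil {
			return false, err
		}
		if n != 0 {
			fmt.Printf("Waiting for stateful set status.replicas to become 0, currently %d\n", n)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	// Toy stand-in that never reaches 0, mirroring the stuck state in the log.
	stuck := func() (int32, error) { return 3, nil }
	if err := waitForZeroReplicas(time.Second, 3*time.Second, stuck); err != nil {
		log.Fatalf("wait failed: %v", err)
	}
}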
make: *** [Makefile:229: conformance-test] Terminated
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2020-01-07T02:54:28Z"}