Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-23 09:22
Elapsed: 2h25m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 605 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.82.199.23; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.........................Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-sd-log_bootstrap-e2e" set.
User "k8s-jkns-gci-gce-sd-log_bootstrap-e2e" set.
Context "k8s-jkns-gci-gce-sd-log_bootstrap-e2e" created.
Switched to context "k8s-jkns-gci-gce-sd-log_bootstrap-e2e".
... skipping 23 lines ...
bootstrap-e2e-minion-group-0324   Ready                      <none>   14s   v1.20.0-alpha.3.84+3627a282799b32
bootstrap-e2e-minion-group-jt1z   Ready                      <none>   11s   v1.20.0-alpha.3.84+3627a282799b32
bootstrap-e2e-minion-group-xbjm   Ready                      <none>   12s   v1.20.0-alpha.3.84+3627a282799b32
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 70 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/before'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.82.199.23; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=56989 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-xbjm
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-jt1z
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-0324

Specify --start=67921 in the next get-serial-port-output invocation to get only the new output starting from here.
... skipping 5 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-0324 bootstrap-e2e-minion-group-jt1z bootstrap-e2e-minion-group-xbjm
Failures for bootstrap-e2e-minion-group (if any):
2020/10/23 09:50:10 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 1m59.517742266s
2020/10/23 09:50:10 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-jkns-gci-gce-sd-log
... skipping 14 lines ...
Using master: bootstrap-e2e-master (external IP: 34.82.199.23; internal IP: (not set))
Oct 23 09:50:13.495: INFO: Fetching cloud provider for "gce"
I1023 09:50:13.495616  144144 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1023 09:50:13.496270  144144 gce.go:903] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc0001a2010), conf:(*jwt.Config)(0xc001b760a0)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W1023 09:50:13.683322  144144 gce.go:474] No network name or URL specified.
I1023 09:50:13.683538  144144 e2e.go:129] Starting e2e run "4769fc99-090d-4e1d-972f-598b49164a1a" on Ginkgo node 1
{"msg":"Test Suite starting","total":306,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1603446612 - Will randomize all specs
Will run 306 of 5229 specs

Oct 23 09:50:18.424: INFO: cluster-master-image: cos-85-13310-1041-9
... skipping 21 lines ...
STEP: Building a namespace api object, basename container-runtime
Oct 23 09:50:19.233: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 23 09:50:22.640: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:50:22.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7476" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":1,"skipped":33,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:50:33.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3338" for this suite.
STEP: Destroying namespace "webhook-3338-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":306,"completed":2,"skipped":48,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Oct 23 09:50:54.740: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:50:54.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7610" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":306,"completed":3,"skipped":55,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 62 lines ...
• [SLOW TEST:305.645 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":306,"completed":4,"skipped":61,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 23 09:56:00.563: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's command
Oct 23 09:56:00.825: INFO: Waiting up to 5m0s for pod "var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20" in namespace "var-expansion-902" to be "Succeeded or Failed"
Oct 23 09:56:00.875: INFO: Pod "var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 50.069873ms
Oct 23 09:56:02.992: INFO: Pod "var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.167386299s
STEP: Saw pod success
Oct 23 09:56:02.992: INFO: Pod "var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20" satisfied condition "Succeeded or Failed"
Oct 23 09:56:03.045: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20 container dapi-container: <nil>
STEP: delete the pod
Oct 23 09:56:03.176: INFO: Waiting for pod var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20 to disappear
Oct 23 09:56:03.218: INFO: Pod var-expansion-5e15b9f3-0fb1-4c9a-82c4-5d0e227b2f20 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:03.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-902" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":306,"completed":5,"skipped":67,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:20.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9793" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":306,"completed":6,"skipped":80,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:45.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1921" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":306,"completed":7,"skipped":84,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-3593/configmap-test-78f678cd-2ac9-40e5-bc66-b403b565b757
STEP: Creating a pod to test consume configMaps
Oct 23 09:56:46.013: INFO: Waiting up to 5m0s for pod "pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0" in namespace "configmap-3593" to be "Succeeded or Failed"
Oct 23 09:56:46.050: INFO: Pod "pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.773009ms
Oct 23 09:56:48.087: INFO: Pod "pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073504652s
STEP: Saw pod success
Oct 23 09:56:48.087: INFO: Pod "pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0" satisfied condition "Succeeded or Failed"
Oct 23 09:56:48.123: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0 container env-test: <nil>
STEP: delete the pod
Oct 23 09:56:48.234: INFO: Waiting for pod pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0 to disappear
Oct 23 09:56:48.272: INFO: Pod pod-configmaps-92f7b44d-8964-4e34-8c8a-854ab8d49ab0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:48.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3593" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":8,"skipped":93,"failed":0}

------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Oct 23 09:56:52.415: INFO: Pod "test-recreate-deployment-f79dd4667-j2j8f" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-j2j8f test-recreate-deployment-f79dd4667- deployment-2750  46b82eac-1224-40ec-aaa5-402a4a2adf67 2054 0 2020-10-23 09:56:51 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 a788e055-20a0-4c56-9100-18e401d27356 0xc004226a40 0xc004226a41}] []  [{kube-controller-manager Update v1 2020-10-23 09:56:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a788e055-20a0-4c56-9100-18e401d27356\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-23 09:56:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fshbn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fshbn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fshbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-0324,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 09:56:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 09:56:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 09:56:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 09:56:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-10-23 09:56:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:52.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2750" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":9,"skipped":93,"failed":0}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:52.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4504" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":306,"completed":10,"skipped":95,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 23 09:56:53.006: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6781 proxy --unix-socket=/tmp/kubectl-proxy-unix061659208/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:56:53.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6781" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":306,"completed":11,"skipped":104,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 23 09:57:01.524: INFO: stderr: ""
Oct 23 09:57:01.524: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5151-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:57:05.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4330" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":306,"completed":12,"skipped":117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-fbfba71d-2ead-41ad-93b2-21f3768377bb
STEP: Creating a pod to test consume secrets
Oct 23 09:57:05.954: INFO: Waiting up to 5m0s for pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917" in namespace "secrets-1152" to be "Succeeded or Failed"
Oct 23 09:57:06.039: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917": Phase="Pending", Reason="", readiness=false. Elapsed: 85.660067ms
Oct 23 09:57:08.078: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124080655s
Oct 23 09:57:10.116: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162795027s
Oct 23 09:57:12.188: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917": Phase="Running", Reason="", readiness=true. Elapsed: 6.234160377s
Oct 23 09:57:14.228: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.274145523s
STEP: Saw pod success
Oct 23 09:57:14.228: INFO: Pod "pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917" satisfied condition "Succeeded or Failed"
Oct 23 09:57:14.267: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917 container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 09:57:14.354: INFO: Waiting for pod pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917 to disappear
Oct 23 09:57:14.396: INFO: Pod pod-secrets-9c508baa-77ed-4596-9cbe-c388e7ae4917 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 09:57:14.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1152" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":13,"skipped":152,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 23 09:57:16.835: INFO: Initial restart count of pod liveness-e247e4a4-414e-461f-a94d-16ccc1d83768 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:01:18.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8697" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":306,"completed":14,"skipped":167,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:01:18.252: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 23 10:01:18.490: INFO: Waiting up to 5m0s for pod "pod-18e53a21-71d8-43af-976f-ba984d9a51e0" in namespace "emptydir-7696" to be "Succeeded or Failed"
Oct 23 10:01:18.528: INFO: Pod "pod-18e53a21-71d8-43af-976f-ba984d9a51e0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.528138ms
Oct 23 10:01:20.596: INFO: Pod "pod-18e53a21-71d8-43af-976f-ba984d9a51e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105907102s
STEP: Saw pod success
Oct 23 10:01:20.596: INFO: Pod "pod-18e53a21-71d8-43af-976f-ba984d9a51e0" satisfied condition "Succeeded or Failed"
Oct 23 10:01:20.634: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-18e53a21-71d8-43af-976f-ba984d9a51e0 container test-container: <nil>
STEP: delete the pod
Oct 23 10:01:20.733: INFO: Waiting for pod pod-18e53a21-71d8-43af-976f-ba984d9a51e0 to disappear
Oct 23 10:01:20.771: INFO: Pod pod-18e53a21-71d8-43af-976f-ba984d9a51e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:01:20.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7696" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":15,"skipped":169,"failed":0}
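The "(root,0777,tmpfs)" variant above exercises a memory-backed emptyDir. A rough sketch of such a pod (names and the probe command are illustrative; the real test writes a file and verifies its mode and content):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory        # tmpfs-backed; usage counts against the container's memory limit
```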
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Oct 23 10:03:52.840: INFO: Restart count of pod container-probe-5541/liveness-94ad823b-ee6e-417d-81c6-d2f40166a474 is now 5 (2m29.546081084s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:03:52.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5541" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":306,"completed":16,"skipped":173,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 178 lines ...
Oct 23 10:06:21.674: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3393"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:21.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-849" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":306,"completed":17,"skipped":180,"failed":0}
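The rollback test above drives a DaemonSet through an update and an undo, checking that unaffected pods are not restarted. A sketch of the object involved (image and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate     # rollback replaces only pods running the bad revision
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        image: httpd:2.4.38-alpine
```

The rollback itself is the standard `kubectl rollout undo daemonset/daemon-set`.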
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 97 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:26.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2001" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":306,"completed":18,"skipped":197,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5110" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":306,"completed":19,"skipped":202,"failed":0}
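The ConfigMap lifecycle steps above (create, patch, list, delete-by-collection) operate on an object like this sketch; the label is what makes deletion by selector possible (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lifecycle-demo
  labels:
    test: lifecycle         # selector target for the collection delete
data:
  key: value
```

Deleting by collection with a label selector is `kubectl delete configmap -l test=lifecycle`.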
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-4634
I1023 10:06:27.958131  144144 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4634, replica count: 2
I1023 10:06:31.058747  144144 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 10:06:31.058: INFO: Creating new exec pod
Oct 23 10:06:34.238: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-4634 exec execpodlx6m2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 23 10:06:35.882: INFO: rc: 1
Oct 23 10:06:35.882: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-4634 exec execpodlx6m2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 10:06:36.883: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-4634 exec execpodlx6m2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 23 10:06:38.426: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 23 10:06:38.426: INFO: stdout: ""
Oct 23 10:06:38.427: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-4634 exec execpodlx6m2 -- /bin/sh -x -c nc -zv -t -w 2 10.0.209.148 80'
... skipping 3 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:38.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4634" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":306,"completed":20,"skipped":230,"failed":0}
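The type change the test performs, and the reason the first `nc` attempt above is refused, can be sketched as the before/after specs. As an ExternalName service, the name resolves to a DNS CNAME with no endpoints; once switched to ClusterIP with a selector, the in-cluster connectivity check starts succeeding (selector, port, and external name here are illustrative):

```yaml
# before: ExternalName — pure DNS alias, no cluster IP, no endpoints
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.example.com
---
# after: same name, now ClusterIP with backing pods
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    app: externalname-backend
  ports:
  - port: 80
    targetPort: 9376
```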
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Oct 23 10:06:40.347: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct 23 10:06:40.347: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:40.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-511" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":306,"completed":21,"skipped":250,"failed":0}
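The "opting out of API token automount" combinations logged above come from two knobs: one on the ServiceAccount and one on the pod spec, with the pod-level setting taking precedence. A sketch of the fully opted-out case (the `nomountsa-nomountspec` variant; names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level default
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-nomountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting overrides the SA's
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

With either knob set to false at the effective level, no token volume mount appears in the pod, which is exactly what the test asserts.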
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 23 10:06:48.028: INFO: stderr: ""
Oct 23 10:06:48.028: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7305-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:52.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3898" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":306,"completed":22,"skipped":257,"failed":0}
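The `kubectl explain` output above ("preserve-unknown-properties in nested field") comes from a CRD whose schema marks an embedded object as open-ended. A sketch of the relevant schema portion (group and names follow the log; the rest is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.crd-publish-openapi-test-unknown-in-nested.example.com
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  names:
    kind: Waldo
    listKind: WaldoList
    plural: waldos
    singular: waldo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true  # arbitrary nested fields survive pruning
          status:
            type: object
```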
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:06:53.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2066" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":306,"completed":23,"skipped":279,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Oct 23 10:06:55.660: INFO: Trying to dial the pod
Oct 23 10:07:00.786: INFO: Controller my-hostname-basic-8c87a5f1-2938-484b-bba5-07d77ac071c3: Got expected result from replica 1 [my-hostname-basic-8c87a5f1-2938-484b-bba5-07d77ac071c3-5gvs6]: "my-hostname-basic-8c87a5f1-2938-484b-bba5-07d77ac071c3-5gvs6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:00.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6249" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":24,"skipped":283,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 23 10:07:00.876: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's args
Oct 23 10:07:01.184: INFO: Waiting up to 5m0s for pod "var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776" in namespace "var-expansion-8886" to be "Succeeded or Failed"
Oct 23 10:07:01.224: INFO: Pod "var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776": Phase="Pending", Reason="", readiness=false. Elapsed: 40.040852ms
Oct 23 10:07:03.283: INFO: Pod "var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098979592s
Oct 23 10:07:05.324: INFO: Pod "var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139470629s
STEP: Saw pod success
Oct 23 10:07:05.324: INFO: Pod "var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776" satisfied condition "Succeeded or Failed"
Oct 23 10:07:05.365: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jt1z pod var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776 container dapi-container: <nil>
STEP: delete the pod
Oct 23 10:07:05.475: INFO: Waiting for pod var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776 to disappear
Oct 23 10:07:05.515: INFO: Pod var-expansion-13bac30d-961e-49f2-a5e5-447f13dd2776 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:05.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8886" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":306,"completed":25,"skipped":370,"failed":0}
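The substitution the test verifies is the kubelet's `$(VAR)` expansion in container args, which happens before any shell sees the command. A minimal sketch (env name and message are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the env"
    command: ["/bin/sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by Kubernetes, not by the shell
```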
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 23 10:07:05.840: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 10:07:10.448: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:29.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5973" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":306,"completed":26,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-c0a03417-ea88-4ea5-ab94-d9b41d2a05b5
STEP: Creating a pod to test consume configMaps
Oct 23 10:07:30.227: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13" in namespace "projected-1856" to be "Succeeded or Failed"
Oct 23 10:07:30.477: INFO: Pod "pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13": Phase="Pending", Reason="", readiness=false. Elapsed: 249.44774ms
Oct 23 10:07:32.515: INFO: Pod "pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288009844s
STEP: Saw pod success
Oct 23 10:07:32.515: INFO: Pod "pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13" satisfied condition "Succeeded or Failed"
Oct 23 10:07:32.553: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13 container agnhost-container: <nil>
STEP: delete the pod
Oct 23 10:07:32.656: INFO: Waiting for pod pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13 to disappear
Oct 23 10:07:32.693: INFO: Pod pod-projected-configmaps-177b49d6-0f42-45a5-a6e7-1f08fe552f13 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:32.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1856" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":27,"skipped":412,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4884" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":306,"completed":28,"skipped":413,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:07:39.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c" in namespace "downward-api-3610" to be "Succeeded or Failed"
Oct 23 10:07:39.474: INFO: Pod "downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.97751ms
Oct 23 10:07:41.705: INFO: Pod "downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.276507048s
STEP: Saw pod success
Oct 23 10:07:41.706: INFO: Pod "downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c" satisfied condition "Succeeded or Failed"
Oct 23 10:07:41.934: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c container client-container: <nil>
STEP: delete the pod
Oct 23 10:07:42.988: INFO: Waiting for pod downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c to disappear
Oct 23 10:07:43.062: INFO: Pod downwardapi-volume-15536b87-76e6-48ca-b8ac-5d4e06cef03c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:43.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3610" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":29,"skipped":415,"failed":0}
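The behavior checked here is the downward API fallback: when a container has no `resources.limits.memory`, a `resourceFieldRef` to `limits.memory` reports the node's allocatable memory instead. A sketch of such a pod (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set: the projected file falls back
    # to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```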
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:07:43.217: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 23 10:07:43.682: INFO: Waiting up to 5m0s for pod "pod-4753fd4a-9ece-40cf-9f44-2f0013177a98" in namespace "emptydir-6147" to be "Succeeded or Failed"
Oct 23 10:07:43.762: INFO: Pod "pod-4753fd4a-9ece-40cf-9f44-2f0013177a98": Phase="Pending", Reason="", readiness=false. Elapsed: 79.891363ms
Oct 23 10:07:45.798: INFO: Pod "pod-4753fd4a-9ece-40cf-9f44-2f0013177a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.116365141s
STEP: Saw pod success
Oct 23 10:07:45.798: INFO: Pod "pod-4753fd4a-9ece-40cf-9f44-2f0013177a98" satisfied condition "Succeeded or Failed"
Oct 23 10:07:45.834: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-4753fd4a-9ece-40cf-9f44-2f0013177a98 container test-container: <nil>
STEP: delete the pod
Oct 23 10:07:45.928: INFO: Waiting for pod pod-4753fd4a-9ece-40cf-9f44-2f0013177a98 to disappear
Oct 23 10:07:45.964: INFO: Pod pod-4753fd4a-9ece-40cf-9f44-2f0013177a98 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:07:45.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6147" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":30,"skipped":449,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8510" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":306,"completed":31,"skipped":480,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Oct 23 10:08:21.522: INFO: stderr: ""
Oct 23 10:08:21.522: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:21.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6935" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":306,"completed":32,"skipped":482,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-ad5ffb5b-fd1b-49d6-96d5-ddc3fc6137fe
STEP: Creating a pod to test consume secrets
Oct 23 10:08:21.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9" in namespace "projected-8483" to be "Succeeded or Failed"
Oct 23 10:08:21.899: INFO: Pod "pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.889109ms
Oct 23 10:08:23.939: INFO: Pod "pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075624054s
STEP: Saw pod success
Oct 23 10:08:23.939: INFO: Pod "pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9" satisfied condition "Succeeded or Failed"
Oct 23 10:08:23.976: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 23 10:08:24.065: INFO: Waiting for pod pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9 to disappear
Oct 23 10:08:24.102: INFO: Pod pod-projected-secrets-b5d507ec-0e03-4771-a6cc-476c1f8257b9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:24.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8483" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":33,"skipped":489,"failed":0}
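The `defaultMode` case above sets one file mode for every file projected from the secret. A sketch of such a volume (secret name, mount path, and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      defaultMode: 0400       # applied to every projected file unless overridden per item
      sources:
      - secret:
          name: projected-secret-test
```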
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-f911ddb5-a715-46a0-8298-249bc5572857
STEP: Creating a pod to test consume configMaps
Oct 23 10:08:24.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145" in namespace "configmap-4255" to be "Succeeded or Failed"
Oct 23 10:08:24.514: INFO: Pod "pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145": Phase="Pending", Reason="", readiness=false. Elapsed: 43.618937ms
Oct 23 10:08:26.572: INFO: Pod "pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101349661s
STEP: Saw pod success
Oct 23 10:08:26.572: INFO: Pod "pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145" satisfied condition "Succeeded or Failed"
Oct 23 10:08:26.669: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 10:08:27.067: INFO: Waiting for pod pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145 to disappear
Oct 23 10:08:27.118: INFO: Pod pod-configmaps-feb96726-7b4e-4236-84d0-b8f5da5ef145 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:27.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4255" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":34,"skipped":514,"failed":0}
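The "with mappings as non-root" variant remaps ConfigMap keys to custom file paths via `items`, and runs the consuming container as a non-root user. A sketch (key, path, and UID are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    securityContext:
      runAsUser: 1000         # non-root consumer
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm-vol
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2  # remaps the key to a different relative file path
```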
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
Oct 23 10:08:54.157: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4206"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:54.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9445" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":306,"completed":35,"skipped":523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 23 10:08:54.630: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-228 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:54.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-228" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":306,"completed":36,"skipped":554,"failed":0}
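`kubectl proxy -p 0` asks the operating system to pick a free port, the same trick as binding a TCP socket to port 0 and reading back the assigned port. A small sketch of that mechanism (the function name is ours; this is not kubectl's code):

```python
import socket

def ephemeral_port():
    """Bind to port 0 and let the kernel choose a free TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = ephemeral_port()
```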
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 23 10:08:55.284: INFO: stderr: ""
Oct 23 10:08:55.284: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.84+3627a282799b32\", GitCommit:\"3627a282799b323d68c99f9a294b0fd211cd0725\", GitTreeState:\"clean\", BuildDate:\"2020-10-23T08:35:45Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.84+3627a282799b32\", GitCommit:\"3627a282799b323d68c99f9a294b0fd211cd0725\", GitTreeState:\"clean\", BuildDate:\"2020-10-23T08:35:45Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:08:55.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9037" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":306,"completed":37,"skipped":555,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:09:00.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-620" for this suite.
STEP: Destroying namespace "webhook-620-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":306,"completed":38,"skipped":584,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Oct 23 10:09:01.022: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8313  0f0817da-3fe5-4a8e-a607-68bf834d6da8 4302 0 2020-10-23 10:09:00 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-23 10:09:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 23 10:09:01.022: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8313  0f0817da-3fe5-4a8e-a607-68bf834d6da8 4303 0 2020-10-23 10:09:00 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-23 10:09:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:09:01.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8313" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":306,"completed":39,"skipped":604,"failed":0}
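The test above restarts a watch from the last resourceVersion it observed (4302 and 4303 appear in the MODIFIED/DELETED events). A toy model of that resume logic over an in-memory event list; note that in the real API resourceVersion is an opaque string and must not be compared numerically, so this is illustrative only:

```python
def resume_watch(events, last_rv):
    """Replay only events newer than last_rv (numeric compare is a toy-model
    simplification; the real Kubernetes resourceVersion is opaque)."""
    return [e for e in events if int(e["resourceVersion"]) > int(last_rv)]

events = [
    {"type": "ADDED", "resourceVersion": "4301"},
    {"type": "MODIFIED", "resourceVersion": "4302"},
    {"type": "DELETED", "resourceVersion": "4303"},
]
# Resume from the last version seen before the watch was closed.
replayed = resume_watch(events, "4301")
```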
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:09:07.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6935" for this suite.
STEP: Destroying namespace "webhook-6935-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":306,"completed":40,"skipped":624,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Oct 23 10:09:20.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2900  2e37e320-9fa5-45ef-bb46-c2c30020af5f 4426 0 2020-10-23 10:09:09 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-23 10:09:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 23 10:09:20.285: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2900  2e37e320-9fa5-45ef-bb46-c2c30020af5f 4427 0 2020-10-23 10:09:09 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-23 10:09:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:09:20.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2900" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":306,"completed":41,"skipped":642,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 10:09:20.602: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-335d25e6-b08c-4c94-9208-2ad2b91541e6" in namespace "security-context-test-3847" to be "Succeeded or Failed"
Oct 23 10:09:20.640: INFO: Pod "busybox-privileged-false-335d25e6-b08c-4c94-9208-2ad2b91541e6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.510633ms
Oct 23 10:09:22.679: INFO: Pod "busybox-privileged-false-335d25e6-b08c-4c94-9208-2ad2b91541e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077150374s
Oct 23 10:09:22.679: INFO: Pod "busybox-privileged-false-335d25e6-b08c-4c94-9208-2ad2b91541e6" satisfied condition "Succeeded or Failed"
Oct 23 10:09:22.722: INFO: Got logs for pod "busybox-privileged-false-335d25e6-b08c-4c94-9208-2ad2b91541e6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:09:22.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3847" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":42,"skipped":647,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Oct 23 10:09:27.329: INFO: Terminating Job.batch foo pods took: 100.269018ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:10:51.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-810" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":306,"completed":43,"skipped":654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:11:03.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7378" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":306,"completed":44,"skipped":684,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Oct 23 10:11:03.432: INFO: cleanMinorVersion: 20
Oct 23 10:11:03.432: INFO: Minor version: 20+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:11:03.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7256" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":306,"completed":45,"skipped":693,"failed":0}
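The server-version test logs `Minor version: 20+` alongside `cleanMinorVersion: 20`: the reported minor version carries a `+` suffix for builds past the release tag, which the test strips before comparing. A sketch of that cleanup step (function name is ours):

```python
import re

def clean_minor_version(minor):
    """Strip trailing non-digits from a Kubernetes minor version like '20+'."""
    m = re.match(r"^\d+", minor)
    if m is None:
        raise ValueError(f"no numeric prefix in {minor!r}")
    return int(m.group())

cleaned = clean_minor_version("20+")
```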
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:12:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2559" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":306,"completed":46,"skipped":713,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 23 10:12:52.463: INFO: File wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:12:52.532: INFO: File jessie_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:12:52.532: INFO: Lookups using dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba failed for: [wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local jessie_udp@dns-test-service-3.dns-61.svc.cluster.local]

Oct 23 10:12:57.572: INFO: File wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:12:57.612: INFO: File jessie_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:12:57.612: INFO: Lookups using dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba failed for: [wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local jessie_udp@dns-test-service-3.dns-61.svc.cluster.local]

Oct 23 10:13:02.571: INFO: File wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:13:02.610: INFO: File jessie_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:13:02.610: INFO: Lookups using dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba failed for: [wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local jessie_udp@dns-test-service-3.dns-61.svc.cluster.local]

Oct 23 10:13:07.571: INFO: File wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:13:07.611: INFO: File jessie_udp@dns-test-service-3.dns-61.svc.cluster.local from pod  dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 23 10:13:07.611: INFO: Lookups using dns-61/dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba failed for: [wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local jessie_udp@dns-test-service-3.dns-61.svc.cluster.local]

Oct 23 10:13:12.611: INFO: DNS probes using dns-test-00a55b93-f274-4989-9835-5ee317c9c6ba succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-61.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:13:15.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-61" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":306,"completed":47,"skipped":719,"failed":0}
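The DNS probe results above are keyed by names like `wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local`, which encode the probe image, the protocol, and the queried FQDN. A sketch of splitting such a name back into its parts (our helper, assuming that naming convention holds):

```python
def parse_probe_name(name):
    """Split 'image_proto@fqdn' into its three components."""
    prefix, fqdn = name.split("@", 1)
    image, proto = prefix.split("_", 1)
    return image, proto, fqdn

parts = parse_probe_name("wheezy_udp@dns-test-service-3.dns-61.svc.cluster.local")
```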
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:13:15.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3" in namespace "projected-8926" to be "Succeeded or Failed"
Oct 23 10:13:15.533: INFO: Pod "downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 38.176626ms
Oct 23 10:13:17.604: INFO: Pod "downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109718116s
STEP: Saw pod success
Oct 23 10:13:17.604: INFO: Pod "downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3" satisfied condition "Succeeded or Failed"
Oct 23 10:13:17.673: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3 container client-container: <nil>
STEP: delete the pod
Oct 23 10:13:18.088: INFO: Waiting for pod downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3 to disappear
Oct 23 10:13:18.125: INFO: Pod downwardapi-volume-45626334-ebad-45be-bd8f-0a7d2b314ef3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:13:18.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8926" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":48,"skipped":727,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:13:34.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8363" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":306,"completed":49,"skipped":735,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:13:34.865: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 23 10:13:35.086: INFO: Waiting up to 5m0s for pod "pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa" in namespace "emptydir-7034" to be "Succeeded or Failed"
Oct 23 10:13:35.122: INFO: Pod "pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 36.057091ms
Oct 23 10:13:37.198: INFO: Pod "pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.111448428s
STEP: Saw pod success
Oct 23 10:13:37.198: INFO: Pod "pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa" satisfied condition "Succeeded or Failed"
Oct 23 10:13:37.234: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa container test-container: <nil>
STEP: delete the pod
Oct 23 10:13:37.550: INFO: Waiting for pod pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa to disappear
Oct 23 10:13:37.588: INFO: Pod pod-cfe2e54b-0d3c-47fe-ba0b-4afd83c1b7aa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:13:37.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7034" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":50,"skipped":738,"failed":0}
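The emptydir test above writes a file with mode 0666 on a tmpfs mount and checks the permission bits. A local sketch of creating a file with an explicit mode and reading it back (ordinary tempdir instead of tmpfs; the chmod guards against the process umask masking bits at creation):

```python
import os
import stat
import tempfile

def create_with_mode(path, mode=0o666):
    """Create path with the given permission bits and return the observed mode."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    os.close(fd)
    os.chmod(path, mode)  # re-apply in case the umask stripped bits
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    observed_mode = create_with_mode(os.path.join(d, "testfile"))
```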
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:13:43.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7357" for this suite.
STEP: Destroying namespace "webhook-7357-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":306,"completed":51,"skipped":766,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 131 lines ...
Oct 23 10:14:39.249: INFO: ss-1  bootstrap-e2e-minion-group-xbjm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:14:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:14:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:14:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:14:05 +0000 UTC  }]
Oct 23 10:14:39.249: INFO: 
Oct 23 10:14:39.249: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5562
Oct 23 10:14:40.309: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:14:40.715: INFO: rc: 1
Oct 23 10:14:40.715: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Oct 23 10:14:50.715: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:14:50.934: INFO: rc: 1
Oct 23 10:14:50.934: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Oct 23 10:15:00.934: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:15:01.165: INFO: rc: 1
Oct 23 10:15:01.165: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Oct 23 10:15:11.165: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:15:11.436: INFO: rc: 1
Oct 23 10:15:11.436: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Oct 23 10:15:21.436: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:15:21.676: INFO: rc: 1
Oct 23 10:15:21.676: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Oct 23 10:15:31.676: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:15:31.932: INFO: rc: 1
Oct 23 10:15:31.932: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
... skipping 230 lines ...
Oct 23 10:19:38.837: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:19:39.080: INFO: rc: 1
Oct 23 10:19:39.080: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Oct 23 10:19:49.080: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5562 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:19:49.322: INFO: rc: 1
Oct 23 10:19:49.322: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Oct 23 10:19:49.322: INFO: Scaling statefulset ss to 0
Oct 23 10:19:49.537: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":306,"completed":52,"skipped":777,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-projected-tf57
STEP: Creating a pod to test atomic-volume-subpath
Oct 23 10:19:50.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tf57" in namespace "subpath-1027" to be "Succeeded or Failed"
Oct 23 10:19:50.446: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Pending", Reason="", readiness=false. Elapsed: 40.023622ms
Oct 23 10:19:52.580: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 2.173759403s
Oct 23 10:19:54.619: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 4.212180041s
Oct 23 10:19:56.658: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 6.251137299s
Oct 23 10:19:58.743: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 8.336536135s
Oct 23 10:20:00.782: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 10.37549372s
Oct 23 10:20:02.821: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 12.414617214s
Oct 23 10:20:04.860: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 14.453984463s
Oct 23 10:20:06.899: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 16.492667285s
Oct 23 10:20:08.937: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 18.531022254s
Oct 23 10:20:10.986: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Running", Reason="", readiness=true. Elapsed: 20.579931471s
Oct 23 10:20:13.025: INFO: Pod "pod-subpath-test-projected-tf57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.618439535s
STEP: Saw pod success
Oct 23 10:20:13.025: INFO: Pod "pod-subpath-test-projected-tf57" satisfied condition "Succeeded or Failed"
Oct 23 10:20:13.063: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-subpath-test-projected-tf57 container test-container-subpath-projected-tf57: <nil>
STEP: delete the pod
Oct 23 10:20:13.165: INFO: Waiting for pod pod-subpath-test-projected-tf57 to disappear
Oct 23 10:20:13.201: INFO: Pod pod-subpath-test-projected-tf57 no longer exists
STEP: Deleting pod pod-subpath-test-projected-tf57
Oct 23 10:20:13.201: INFO: Deleting pod "pod-subpath-test-projected-tf57" in namespace "subpath-1027"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:13.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1027" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":306,"completed":53,"skipped":793,"failed":0}
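The subpath test above polls the pod's phase until it reaches "Succeeded or Failed", printing the elapsed time on each check. A hypothetical shell equivalent of that timeout-bounded poll; the phase source is stubbed here (against a real cluster it would be something like `kubectl get pod NAME -o jsonpath='{.status.phase}'`), and `wait_for_phase` is an illustrative name, not the framework's:

```shell
#!/usr/bin/env bash
# wait_for_phase: poll a phase-reporting command until it returns a terminal
# pod phase (Succeeded or Failed) or the timeout expires. Sketch of the
# e2e framework's wait-for-pod loop seen in the log above.
wait_for_phase() {
  local timeout=$1 interval=$2; shift 2
  local start=$SECONDS phase
  while (( SECONDS - start < timeout )); do
    phase=$("$@")
    echo "Phase=\"${phase}\". Elapsed: $((SECONDS - start))s"
    case "$phase" in
      Succeeded|Failed) echo "pod reached terminal phase: $phase"; return 0 ;;
    esac
    sleep "$interval"
  done
  echo "timed out waiting for terminal phase" >&2
  return 1
}

# Stubbed phase source: Pending twice, then Succeeded.
phase_file=$(mktemp)
printf 'Pending\nPending\nSucceeded\n' > "$phase_file"
read_phase() { head -n1 "$phase_file"; sed -i '1d' "$phase_file"; }
wait_for_phase 30 1 read_phase
```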
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 23 10:20:15.701: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:15.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6733" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":54,"skipped":802,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 10:20:15.864: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-21aa7da2-3ac1-4932-a9b3-264b0c751f54
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1732" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":306,"completed":55,"skipped":807,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-62efb594-22d6-4b14-9575-6bad90da700e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:20.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1880" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":56,"skipped":865,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
Oct 23 10:20:35.277: INFO: 99 %ile: 1.970384643s
Oct 23 10:20:35.277: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:35.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2367" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":306,"completed":57,"skipped":884,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Oct 23 10:20:37.944: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 10:20:38.327: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-890" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":306,"completed":58,"skipped":897,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Oct 23 10:20:52.959: INFO: stderr: ""
Oct 23 10:20:52.959: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:52.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-455" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":306,"completed":59,"skipped":926,"failed":0}
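The kubectl invocations throughout this log all share the same shape: global flags (`--server`, `--kubeconfig`, `--namespace`) first, then the subcommand, then `--` separating kubectl's own arguments from the command to run inside the pod. A sketch that assembles such an invocation as a bash array and prints it without contacting any cluster (the values are taken from or modeled on this log, and nothing is executed remotely):

```shell
#!/usr/bin/env bash
# Assemble a kubectl exec invocation as an array, mirroring the commands in
# this log. Building it as an array keeps quoting of the remote command
# intact. Nothing is sent to a cluster here; the command is only printed.
server="https://34.82.199.23"        # API server address seen in the log
kubeconfig="/workspace/.kube/config"
namespace="statefulset-5562"
pod="ss-0"
remote_cmd='mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

cmd=(kubectl
  --server="$server"
  --kubeconfig="$kubeconfig"
  --namespace="$namespace"
  exec "$pod"
  --                                 # everything after -- runs in the pod
  /bin/sh -x -c "$remote_cmd")

# Print the command instead of executing it.
printf '%q ' "${cmd[@]}"; echo
```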
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:20:53.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96" in namespace "projected-439" to be "Succeeded or Failed"
Oct 23 10:20:53.876: INFO: Pod "downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96": Phase="Pending", Reason="", readiness=false. Elapsed: 179.66999ms
Oct 23 10:20:55.934: INFO: Pod "downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237187s
Oct 23 10:20:57.994: INFO: Pod "downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297454858s
STEP: Saw pod success
Oct 23 10:20:57.994: INFO: Pod "downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96" satisfied condition "Succeeded or Failed"
Oct 23 10:20:58.052: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96 container client-container: <nil>
STEP: delete the pod
Oct 23 10:20:58.435: INFO: Waiting for pod downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96 to disappear
Oct 23 10:20:58.533: INFO: Pod downwardapi-volume-e9447d94-ed4c-4368-a13f-fdae2a50ee96 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:20:58.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-439" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":60,"skipped":940,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 23 10:20:58.782: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 23 10:20:59.642: INFO: Waiting up to 5m0s for pod "downward-api-242a7744-70ec-468a-813a-85ef81453c0f" in namespace "downward-api-4963" to be "Succeeded or Failed"
Oct 23 10:20:59.829: INFO: Pod "downward-api-242a7744-70ec-468a-813a-85ef81453c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 187.441655ms
Oct 23 10:21:01.866: INFO: Pod "downward-api-242a7744-70ec-468a-813a-85ef81453c0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22408558s
STEP: Saw pod success
Oct 23 10:21:01.866: INFO: Pod "downward-api-242a7744-70ec-468a-813a-85ef81453c0f" satisfied condition "Succeeded or Failed"
Oct 23 10:21:01.902: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downward-api-242a7744-70ec-468a-813a-85ef81453c0f container dapi-container: <nil>
STEP: delete the pod
Oct 23 10:21:01.987: INFO: Waiting for pod downward-api-242a7744-70ec-468a-813a-85ef81453c0f to disappear
Oct 23 10:21:02.024: INFO: Pod downward-api-242a7744-70ec-468a-813a-85ef81453c0f no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:21:02.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4963" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":306,"completed":61,"skipped":960,"failed":0}
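The "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines above, with their per-attempt "Elapsed: ..." reports, come from a generic poll-until-condition loop in the e2e framework. As a rough illustration only (not the framework's actual code; the name `wait_for_condition` and its parameters are invented for this sketch), the pattern looks like:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout seconds elapse.

    Returns the elapsed time on success; raises TimeoutError otherwise.
    Mirrors the log's "Waiting up to 5m0s for pod ..." pattern, where
    each attempt reports how long has elapsed so far.
    """
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)
```

In the log above the condition is "pod phase is Succeeded or Failed", checked roughly every two seconds against the API server.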
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:21:11.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8200" for this suite.
STEP: Destroying namespace "webhook-8200-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":306,"completed":62,"skipped":961,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Oct 23 10:23:10.066: INFO: Terminating ReplicationController wrapped-volume-race-720df8f7-60f1-46e3-9208-afeec3f355d4 pods took: 800.410286ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:23:26.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9773" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":306,"completed":63,"skipped":970,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:23:28.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9760" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":306,"completed":64,"skipped":1037,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:23:46.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-866" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":306,"completed":65,"skipped":1049,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 23 10:23:50.980: INFO: Initial restart count of pod test-webserver-7b18277d-f72d-4b16-b83d-43fb798150ba is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:27:52.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4582" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":66,"skipped":1089,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 23 10:27:55.662: INFO: Successfully updated pod "labelsupdate7191b3ac-7906-403a-81d2-ec16b08d90dd"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:27:59.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-459" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":67,"skipped":1105,"failed":0}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 23 10:28:00.060: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:28:03.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6691" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":306,"completed":68,"skipped":1105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Oct 23 10:28:05.918: INFO: Pod pod-hostip-981c62b5-73b4-465c-a3bd-da007858a0db has hostIP: 10.138.0.4
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:28:05.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8404" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":306,"completed":69,"skipped":1135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 23 10:28:06.704: INFO: stderr: ""
Oct 23 10:28:06.704: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://34.82.199.23\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:28:06.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2529" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":306,"completed":70,"skipped":1161,"failed":0}
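The `cluster-info` stdout captured above embeds ANSI SGR color escapes (`\x1b[0;32m`, `\x1b[0m`, ...) because kubectl colorizes its output. When post-processing logs like this one, a small regex strips them; `strip_ansi` here is a hypothetical helper, not part of any kubectl tooling:

```python
import re

# SGR (color) escape sequences such as "\x1b[0;32m" and the reset "\x1b[0m".
ANSI_SGR_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color escape sequences from captured kubectl output."""
    return ANSI_SGR_RE.sub("", text)
```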
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:28:13.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-699" for this suite.
STEP: Destroying namespace "webhook-699-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":306,"completed":71,"skipped":1200,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 79 lines ...
Oct 23 10:31:32.007: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 10:31:32.045: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:31:32.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7024" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":306,"completed":72,"skipped":1202,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:31:32.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d" in namespace "projected-4140" to be "Succeeded or Failed"
Oct 23 10:31:32.584: INFO: Pod "downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.615131ms
Oct 23 10:31:34.635: INFO: Pod "downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105684793s
STEP: Saw pod success
Oct 23 10:31:34.635: INFO: Pod "downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d" satisfied condition "Succeeded or Failed"
Oct 23 10:31:34.678: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d container client-container: <nil>
STEP: delete the pod
Oct 23 10:31:34.906: INFO: Waiting for pod downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d to disappear
Oct 23 10:31:34.943: INFO: Pod downwardapi-volume-c6bf4c69-ef0d-4781-b908-68854731871d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:31:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4140" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":73,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:31:35.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-356" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":306,"completed":74,"skipped":1253,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Oct 23 10:32:42.173: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 10:32:42.173: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 10:32:42.173: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct 23 10:32:42.173: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:32:42.802: INFO: rc: 1
Oct 23 10:32:42.802: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: container is in CONTAINER_EXITED state

error:
exit status 1
Oct 23 10:32:52.803: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:32:53.027: INFO: rc: 1
Oct 23 10:32:53.027: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 210 lines ...
Oct 23 10:36:38.569: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:36:38.798: INFO: rc: 1
Oct 23 10:36:38.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:36:48.799: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:36:49.034: INFO: rc: 1
Oct 23 10:36:49.034: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:36:59.035: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:36:59.269: INFO: rc: 1
Oct 23 10:36:59.269: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:37:09.269: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:37:09.487: INFO: rc: 1
Oct 23 10:37:09.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:37:19.488: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:37:19.759: INFO: rc: 1
Oct 23 10:37:19.759: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:37:29.759: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:37:29.982: INFO: rc: 1
Oct 23 10:37:29.982: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:37:39.982: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:37:40.214: INFO: rc: 1
Oct 23 10:37:40.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 23 10:37:50.214: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1059 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 10:37:50.565: INFO: rc: 1
Oct 23 10:37:50.565: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Oct 23 10:37:50.565: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":306,"completed":75,"skipped":1285,"failed":0}
SSSSSSSSSSSSS
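The retry sequence logged above follows a simple pattern: the e2e framework re-runs the `kubectl exec` host command every 10 seconds until it exits 0 or the overall timeout elapses, logging each non-zero `rc` along the way. A minimal sketch of that pattern, assuming a hypothetical `run_host_cmd_with_retry` helper (names are illustrative, not the framework's actual API):

```python
import time

def run_host_cmd_with_retry(cmd, interval=10.0, timeout=300.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Run cmd() repeatedly until it reports rc 0 or the timeout expires.

    cmd is a callable returning (rc, stdout, stderr), mirroring the shape
    of the RunHostCmd results in the log above. clock and sleep are
    injectable so the loop can be tested without real waiting.
    """
    deadline = clock() + timeout
    rc, out, err = cmd()          # first attempt, no initial delay
    while rc != 0 and clock() < deadline:
        sleep(interval)           # "Waiting 10s to retry failed RunHostCmd"
        rc, out, err = cmd()
    return rc, out, err
```

In the log, the command keeps failing with `NotFound` because pod ss-2 was intentionally deleted; the trailing `|| true` on the shell side means the loop ultimately gives up gracefully rather than failing the test.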
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
Oct 23 10:38:13.719: INFO: Pod "test-rollover-deployment-668db69979-7dr4m" is available:
&Pod{ObjectMeta:{test-rollover-deployment-668db69979-7dr4m test-rollover-deployment-668db69979- deployment-3106  18eb6a9d-b44a-4ddb-a018-37d1852c2ecb 11286 0 2020-10-23 10:38:01 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 0e4ecd78-f6da-4788-92dd-e6a325d14c8c 0xc00206b2f7 0xc00206b2f8}] []  [{kube-controller-manager Update v1 2020-10-23 10:38:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e4ecd78-f6da-4788-92dd-e6a325d14c8c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-23 10:38:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nzxzx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nzxzx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nzxzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-xbjm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 10:38:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 10:38:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 10:38:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 10:38:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.2.44,StartTime:2020-10-23 10:38:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-23 10:38:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://61de61138d5251d345f8ea6750291b4f822d5c270ef14c870ef0c4be0640990f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:13.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3106" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":306,"completed":76,"skipped":1298,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 23 10:38:18.305: INFO: Trying to dial the pod
Oct 23 10:38:23.425: INFO: Controller my-hostname-basic-e5c17ea7-f6a7-4a1e-9f46-7e2850b1d55b: Got expected result from replica 1 [my-hostname-basic-e5c17ea7-f6a7-4a1e-9f46-7e2850b1d55b-26kjb]: "my-hostname-basic-e5c17ea7-f6a7-4a1e-9f46-7e2850b1d55b-26kjb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:23.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4971" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":77,"skipped":1314,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
STEP: creating replication controller nodeport-test in namespace services-1764
I1023 10:38:23.790925  144144 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1764, replica count: 2
Oct 23 10:38:26.841: INFO: Creating new exec pod
I1023 10:38:26.841351  144144 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 10:38:30.137: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-1764 exec execpodjvwsq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 23 10:38:31.962: INFO: rc: 1
Oct 23 10:38:31.962: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-1764 exec execpodjvwsq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 10:38:32.962: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-1764 exec execpodjvwsq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 23 10:38:34.772: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Oct 23 10:38:34.772: INFO: stdout: ""
Oct 23 10:38:34.773: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-1764 exec execpodjvwsq -- /bin/sh -x -c nc -zv -t -w 2 10.0.250.140 80'
... skipping 14 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:37.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1764" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":306,"completed":78,"skipped":1315,"failed":0}
SSSSSSSSSSSSSSSSS
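The NodePort test above probes service reachability with `nc -zv -t -w 2 <host> <port>`; the first attempt fails with "Connection refused" because no endpoint is listening yet, and the retry a second later succeeds. A rough Python equivalent of that TCP probe (a sketch, not the test framework's code):

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connect to host:port succeeds within timeout,
    roughly what `nc -zv -t -w 2 host port` checks. Connection refused or
    a timeout both mean the service endpoint is not reachable (yet)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```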
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 23 10:38:54.020: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 23 10:38:54.059: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:54.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-489" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":306,"completed":79,"skipped":1332,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:54.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4454" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":306,"completed":80,"skipped":1335,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:38:55.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b" in namespace "projected-7561" to be "Succeeded or Failed"
Oct 23 10:38:55.086: INFO: Pod "downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b": Phase="Pending", Reason="", readiness=false. Elapsed: 64.831869ms
Oct 23 10:38:57.124: INFO: Pod "downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102751862s
STEP: Saw pod success
Oct 23 10:38:57.124: INFO: Pod "downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b" satisfied condition "Succeeded or Failed"
Oct 23 10:38:57.163: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b container client-container: <nil>
STEP: delete the pod
Oct 23 10:38:57.292: INFO: Waiting for pod downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b to disappear
Oct 23 10:38:57.330: INFO: Pod downwardapi-volume-5e9baac8-1d1f-46b0-9fbe-dad82633065b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:38:57.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7561" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":81,"skipped":1340,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 10:38:57.413: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 23 10:38:57.602: INFO: PodSpec: initContainers in spec.initContainers
Oct 23 10:39:47.830: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c4aed199-c62c-4ff9-852b-732fb9a14846", GenerateName:"", Namespace:"init-container-1993", SelfLink:"", UID:"2eef196b-2551-4847-a9c9-c5e4df3c3ed4", ResourceVersion:"11651", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739046337, loc:(*time.Location)(0x774f580)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"602102679"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f4e3e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f4e440)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f4e460), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f4e480)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nbmkh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002162380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmkh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmkh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmkh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ece628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-xbjm", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f70230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ece780)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ece7a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ece7a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ece7ac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001f600d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739046337, loc:(*time.Location)(0x774f580)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739046337, loc:(*time.Location)(0x774f580)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739046337, loc:(*time.Location)(0x774f580)}}, Reason:"ContainersNotReady", Message:"containers with unready status: 
[run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739046337, loc:(*time.Location)(0x774f580)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.3", PodIP:"10.64.2.48", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.2.48"}}, StartTime:(*v1.Time)(0xc001f4e4a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f70310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f70380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a4780534bda8b49d88b4bb28634b1e6ad927e7b41fdb90b1dce94b4c9df4161d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f4e4e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f4e4c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001ece83f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:39:47.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1993" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":306,"completed":82,"skipped":1355,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-5332
I1023 10:39:48.269332  144144 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5332, replica count: 2
Oct 23 10:39:51.319: INFO: Creating new exec pod
I1023 10:39:51.319716  144144 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 10:39:54.531: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-5332 exec execpodn4tjt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 23 10:39:56.054: INFO: rc: 1
Oct 23 10:39:56.054: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-5332 exec execpodn4tjt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 10:39:57.054: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-5332 exec execpodn4tjt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 23 10:39:58.571: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 23 10:39:58.571: INFO: stdout: ""
Oct 23 10:39:58.572: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-5332 exec execpodn4tjt -- /bin/sh -x -c nc -zv -t -w 2 10.0.71.112 80'
... skipping 15 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:01.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5332" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":306,"completed":83,"skipped":1355,"failed":0}
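The reachability loop above — run `nc -zv -t -w 2 <service> <port>` inside an exec pod, log `Connection refused`, sleep, and retry until the endpoints are programmed — can be sketched as a generic TCP probe. This is a minimal illustration, not the e2e framework's actual helper; the function name and parameters are invented for this sketch.

```python
import socket
import time

def wait_for_tcp(host, port, timeout=2.0, retries=10, delay=1.0):
    """Retry a TCP connect until it succeeds or retries run out,
    mirroring the `nc -zv -t -w 2 <service> <port>` loop in the log.
    Returns True once a connection is accepted, False otherwise."""
    for attempt in range(retries):
        try:
            # socket.create_connection performs the same connect-and-close
            # handshake that `nc -z` does.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Endpoint not ready yet (e.g. kube-proxy rules still syncing).
            if attempt < retries - 1:
                time.sleep(delay)
    return False
```

The first attempt in the log fails with exit code 1 and the second succeeds about a second later, which is exactly the window this kind of retry loop is meant to absorb while Endpoints propagate.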
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 23 10:40:10.421: INFO: stderr: ""
Oct 23 10:40:10.421: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9771-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4735" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":306,"completed":84,"skipped":1358,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:20.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9401" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":85,"skipped":1367,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
Oct 23 10:40:44.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11911"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:44.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5062" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":306,"completed":86,"skipped":1368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 23 10:40:44.631: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 23 10:40:45.070: INFO: Waiting up to 5m0s for pod "downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061" in namespace "downward-api-2851" to be "Succeeded or Failed"
Oct 23 10:40:45.107: INFO: Pod "downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061": Phase="Pending", Reason="", readiness=false. Elapsed: 36.927077ms
Oct 23 10:40:47.145: INFO: Pod "downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07443674s
STEP: Saw pod success
Oct 23 10:40:47.145: INFO: Pod "downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061" satisfied condition "Succeeded or Failed"
Oct 23 10:40:47.182: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061 container dapi-container: <nil>
STEP: delete the pod
Oct 23 10:40:47.282: INFO: Waiting for pod downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061 to disappear
Oct 23 10:40:47.319: INFO: Pod downward-api-b28d481e-0354-4c31-b6b5-ec4066e94061 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:47.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2851" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":306,"completed":87,"skipped":1427,"failed":0}
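The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above, with their `Phase="Pending" ... Elapsed: 36.9ms` progress entries, follow a poll-until-phase pattern. A minimal sketch of that loop, assuming a caller-supplied `get_phase` callable in place of a real client-go or kubectl status lookup:

```python
import time

def wait_for_phase(get_phase, target_phases=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll a status getter until it reports one of the target phases,
    echoing the e2e framework's 5m0s pod wait. `get_phase` is a stand-in
    for fetching pod.status.phase from the API server."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in target_phases:
            return phase
        time.sleep(interval)
    raise TimeoutError(
        "pod did not reach %s within %ss" % (target_phases, timeout))
```

Accepting both `Succeeded` and `Failed` as terminal phases matches the log's condition string: the test then asserts separately which terminal phase was reached ("STEP: Saw pod success").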
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-2446340d-7aa2-46bb-afef-9d51bfde3dd3
STEP: Creating a pod to test consume secrets
Oct 23 10:40:47.671: INFO: Waiting up to 5m0s for pod "pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73" in namespace "secrets-4453" to be "Succeeded or Failed"
Oct 23 10:40:47.709: INFO: Pod "pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73": Phase="Pending", Reason="", readiness=false. Elapsed: 37.321614ms
Oct 23 10:40:49.902: INFO: Pod "pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230434428s
Oct 23 10:40:51.941: INFO: Pod "pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269345278s
STEP: Saw pod success
Oct 23 10:40:51.941: INFO: Pod "pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73" satisfied condition "Succeeded or Failed"
Oct 23 10:40:51.981: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73 container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 10:40:52.084: INFO: Waiting for pod pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73 to disappear
Oct 23 10:40:52.121: INFO: Pod pod-secrets-738af2e0-e653-4559-bcbd-a8d8c60f4f73 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:40:52.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4453" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":88,"skipped":1428,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip-transition in namespace services-472
I1023 10:40:52.704116  144144 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-472, replica count: 3
I1023 10:40:55.804618  144144 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 10:40:55.879: INFO: Creating new exec pod
Oct 23 10:40:59.091: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-472 exec execpod-affinityc2cxw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 23 10:41:00.562: INFO: rc: 1
Oct 23 10:41:00.562: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-472 exec execpod-affinityc2cxw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 10:41:01.562: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-472 exec execpod-affinityc2cxw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 23 10:41:03.060: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Oct 23 10:41:03.060: INFO: stdout: ""
Oct 23 10:41:03.061: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-472 exec execpod-affinityc2cxw -- /bin/sh -x -c nc -zv -t -w 2 10.0.142.69 80'
... skipping 63 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:41:44.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-472" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":89,"skipped":1433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:41:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2259" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":306,"completed":90,"skipped":1465,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:41:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-459" for this suite.
STEP: Destroying namespace "webhook-459-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":306,"completed":91,"skipped":1477,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:42:12.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2753" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":306,"completed":92,"skipped":1504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
Oct 23 10:42:34.354: INFO: reached 10.64.2.55 after 0/1 tries
Oct 23 10:42:34.354: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:42:34.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2238" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":306,"completed":93,"skipped":1548,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:42:34.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1" in namespace "projected-4337" to be "Succeeded or Failed"
Oct 23 10:42:34.717: INFO: Pod "downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.221702ms
Oct 23 10:42:36.755: INFO: Pod "downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077986489s
STEP: Saw pod success
Oct 23 10:42:36.755: INFO: Pod "downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1" satisfied condition "Succeeded or Failed"
Oct 23 10:42:36.794: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1 container client-container: <nil>
STEP: delete the pod
Oct 23 10:42:36.892: INFO: Waiting for pod downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1 to disappear
Oct 23 10:42:36.929: INFO: Pod downwardapi-volume-05f491e2-a128-4f79-b3b5-b5c75febfec1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:42:36.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4337" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":94,"skipped":1550,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 160 lines ...
Oct 23 10:42:39.523: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2439 create -f -'
Oct 23 10:42:39.990: INFO: stderr: ""
Oct 23 10:42:39.990: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 23 10:42:39.990: INFO: Waiting for all frontend pods to be Running.
Oct 23 10:42:45.140: INFO: Waiting for frontend to serve content.
Oct 23 10:42:46.251: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct 23 10:42:51.298: INFO: Trying to add a new entry to the guestbook.
Oct 23 10:42:51.350: INFO: Verifying that added entry can be retrieved.
Oct 23 10:42:51.399: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Oct 23 10:42:56.456: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2439 delete --grace-period=0 --force -f -'
Oct 23 10:42:56.723: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 23 10:42:56.723: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 23 10:42:56.724: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2439 delete --grace-period=0 --force -f -'
... skipping 16 lines ...
Oct 23 10:42:57.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 23 10:42:57.943: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:42:57.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2439" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":306,"completed":95,"skipped":1559,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Oct 23 10:43:01.711: INFO: Deleting pod "var-expansion-8e812475-e361-4c84-ba4f-992dcec0d808" in namespace "var-expansion-5207"
Oct 23 10:43:01.755: INFO: Wait up to 5m0s for pod "var-expansion-8e812475-e361-4c84-ba4f-992dcec0d808" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:43:41.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5207" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":306,"completed":96,"skipped":1562,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 23 10:44:00.774: INFO: Restart count of pod container-probe-4968/liveness-a7f85e45-da81-43a3-9d3a-4d045fe400ee is now 1 (16.433962885s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:00.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4968" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":97,"skipped":1570,"failed":0}
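The liveness-probe spec above restarts a container once its `/healthz` endpoint stops answering. The kubelet's HTTP probe decision — any 2xx/3xx response is healthy, anything else (or no response) is a failure — can be sketched like this; the function name is illustrative and this treats only 2xx as healthy for simplicity:

```python
import urllib.error
import urllib.request

def healthz_ok(url, timeout=1.0):
    """Return True if the endpoint answers with a 2xx status within the
    timeout, the way an HTTP liveness probe decides a container is alive.
    Connection errors and timeouts count as probe failures."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False
```

In the log, the restart count for `liveness-a7f85e45-...` reaches 1 roughly 16s after pod start, consistent with a probe failing its threshold and the kubelet restarting the container.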
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-915" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":306,"completed":98,"skipped":1585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Oct 23 10:44:04.183: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 10:44:04.183: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:04.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4877" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":306,"completed":99,"skipped":1657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:44:04.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437" in namespace "downward-api-6097" to be "Succeeded or Failed"
Oct 23 10:44:04.804: INFO: Pod "downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437": Phase="Pending", Reason="", readiness=false. Elapsed: 57.96361ms
Oct 23 10:44:06.850: INFO: Pod "downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103551502s
STEP: Saw pod success
Oct 23 10:44:06.850: INFO: Pod "downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437" satisfied condition "Succeeded or Failed"
Oct 23 10:44:06.894: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437 container client-container: <nil>
STEP: delete the pod
Oct 23 10:44:07.038: INFO: Waiting for pod downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437 to disappear
Oct 23 10:44:07.077: INFO: Pod downwardapi-volume-8e756cb5-397d-4991-9b43-2afa6a771437 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:07.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6097" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":306,"completed":100,"skipped":1721,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Oct 23 10:44:12.745: INFO: Pod "adopt-release-nnf2d": Phase="Running", Reason="", readiness=true. Elapsed: 43.10505ms
Oct 23 10:44:12.745: INFO: Pod "adopt-release-nnf2d" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:12.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4846" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":306,"completed":101,"skipped":1731,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Oct 23 10:44:13.253: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:13.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8614" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":306,"completed":102,"skipped":1765,"failed":0}

------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 155 lines ...
Oct 23 10:44:48.141: INFO: stderr: ""
Oct 23 10:44:48.141: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:44:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3999" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":306,"completed":103,"skipped":1765,"failed":0}
SS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 57 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:45:14.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4266" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":104,"skipped":1767,"failed":0}
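The session-affinity behavior exercised by the test above corresponds to a Service of roughly the following shape; the name, selector, and ports are illustrative assumptions, not values from the test:

```yaml
# Hypothetical sketch of a NodePort Service with ClientIP session affinity,
# the feature the conformance test above exercises.
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport          # hypothetical name
spec:
  type: NodePort
  sessionAffinity: ClientIP        # requests from one client IP stick to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # default affinity timeout (3 hours)
  selector:
    app: affinity-backend          # hypothetical label
  ports:
  - port: 80
    targetPort: 8080
```

The test asserts that repeated requests through the node port from the same client land on the same backend pod while the affinity timeout has not expired.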
SSSSSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:45:15.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-2071" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":306,"completed":105,"skipped":1779,"failed":0}
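The Ingress API operations above (create, get, patch, update, status update, delete, delete collection) act on objects of roughly this shape; the names and host below are illustrative, not from the test:

```yaml
# Hypothetical minimal Ingress of the kind the API-operations test
# creates and deletes. Host, names, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical
spec:
  rules:
  - host: example.com              # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service  # hypothetical backing Service
            port:
              number: 80
```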
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-8b85605c-b63f-410e-a6b7-5046e724ef01
STEP: Creating a pod to test consume configMaps
Oct 23 10:45:15.966: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4" in namespace "projected-6410" to be "Succeeded or Failed"
Oct 23 10:45:16.004: INFO: Pod "pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.404139ms
Oct 23 10:45:18.047: INFO: Pod "pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080568044s
STEP: Saw pod success
Oct 23 10:45:18.047: INFO: Pod "pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4" satisfied condition "Succeeded or Failed"
Oct 23 10:45:18.121: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4 container agnhost-container: <nil>
STEP: delete the pod
Oct 23 10:45:18.435: INFO: Waiting for pod pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4 to disappear
Oct 23 10:45:18.582: INFO: Pod pod-projected-configmaps-31bae8c0-673a-4359-b6d4-b1617923dfb4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:45:18.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6410" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":106,"skipped":1782,"failed":0}
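The "mappings and Item mode" case above refers to a projected ConfigMap volume where a key is remapped to a different path and given an explicit per-item file mode. A hedged sketch of that volume shape (the ConfigMap name, key, and path are illustrative; the pod and image names echo the log):

```yaml
# Hypothetical sketch of a projected-ConfigMap volume with a key mapping
# and an explicit item mode, as exercised by the test above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative name
          items:
          - key: data-1                # source key in the ConfigMap
            path: path/to/data-2       # the "mapping": key renamed on disk
            mode: 0400                 # the per-item file mode under test
```

The test then reads the file from inside the pod and asserts both the mapped path and the 0400 mode.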
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
Oct 23 10:45:41.720: INFO: reached 10.64.2.64 after 0/1 tries
Oct 23 10:45:41.720: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:45:41.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7858" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":306,"completed":107,"skipped":1801,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:46:42.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5381" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":306,"completed":108,"skipped":1806,"failed":0}
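The probe semantics the test above verifies: a failing readiness probe keeps a pod out of the Ready state but, unlike a failing liveness probe, never triggers a container restart. A hedged sketch of such a pod (name, image, and timings are illustrative):

```yaml
# Hypothetical pod whose readiness probe always fails: it keeps running,
# is never reported Ready, and is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver             # hypothetical
spec:
  containers:
  - name: probe-container
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always exits non-zero -> never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```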
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Oct 23 10:47:26.211: INFO: Deleting pod "simpletest.rc-x6qsj" in namespace "gc-6531"
Oct 23 10:47:26.261: INFO: Deleting pod "simpletest.rc-zwpf6" in namespace "gc-6531"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:47:26.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6531" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":306,"completed":109,"skipped":1843,"failed":0}
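The "delete options say so" clause above refers to deleting the ReplicationController with an Orphan propagation policy, which removes the controller but leaves its pods behind (the owner references are stripped instead of cascading the delete). A sketch of those options:

```yaml
# Hypothetical illustration of the delete options the test passes when
# removing the ReplicationController: Orphan leaves dependents behind.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl, the rough equivalent is `kubectl delete rc <name> --cascade=orphan` (older releases spelled this `--cascade=false`).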
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 13 lines ...
Oct 23 10:48:27.217: INFO: Pod1 is running on bootstrap-e2e-minion-group-0324. Tainting Node
Oct 23 10:48:29.692: INFO: Pod2 is running on bootstrap-e2e-minion-group-0324. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod1 and Pod2 to be deleted
Oct 23 10:48:41.608: INFO: Noticed Pod "taint-eviction-b1" gets evicted.
Oct 23 10:49:44.860: FAIL: Failed to evict all Pods. 1 pod(s) is not evicted.

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000d4c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc000d4c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
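For context on the failure above: the test applies a NoExecute taint (the key and value appear in the log) and expects both pods to be evicted once their `tolerationSeconds` expire; here one pod was never observed as evicted. A hedged sketch of the toleration involved (the pod name echoes the log; the spec and the tolerationSeconds value are illustrative):

```yaml
# Hypothetical sketch of the taint/toleration pair behind the failed
# eviction: a NoExecute taint evicts running pods, and a pod tolerating
# it with tolerationSeconds is evicted only after that delay.
apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-b2          # name from the log; spec is illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key   # taint key from the log
    operator: Equal
    value: evictTaintVal                     # taint value from the log
    effect: NoExecute
    tolerationSeconds: 15                    # hypothetical minTolerationSeconds
```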
... skipping 21 lines ...
Oct 23 10:49:45.286: INFO: POD                NODE                             PHASE    GRACE  CONDITIONS
Oct 23 10:49:45.286: INFO: taint-eviction-b2  bootstrap-e2e-minion-group-0324  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:48:55 +0000 UTC ContainersNotReady containers with unready status: [pause]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:48:55 +0000 UTC ContainersNotReady containers with unready status: [pause]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-23 10:48:27 +0000 UTC  }]
Oct 23 10:49:45.286: INFO: 
Oct 23 10:49:45.286: INFO: taint-eviction-b2[taint-multiple-pods-2484].container[pause]=The container could not be located when the pod was deleted.  The container used to be Running
Oct 23 10:49:45.351: INFO: 
Logging node info for node bootstrap-e2e-master
Oct 23 10:49:45.437: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master    678ba868-b76c-48e7-be37-e5a00a0e6f27 13732 0 2020-10-23 09:47:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2020-10-23 09:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {kubelet Update v1 2020-10-23 09:47:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kub
eletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-sd-log/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3866816512 0} {<nil>} 3776188Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3604672512 0} {<nil>} 3520188Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-23 09:47:44 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:24 +0000 UTC,LastTransitionTime:2020-10-23 09:47:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:24 +0000 UTC,LastTransitionTime:2020-10-23 09:47:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:24 +0000 UTC,LastTransitionTime:2020-10-23 09:47:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-23 10:48:24 +0000 UTC,LastTransitionTime:2020-10-23 09:47:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.199.23,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-sd-log.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-sd-log.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2992e78d768c5f76bfaddf89661c75dc,SystemUUID:2992e78d-768c-5f76-bfad-df89661c75dc,BootID:28ac2e50-e450-4038-865c-0aee5b7e1edf,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.84+3627a282799b32,KubeProxyVersion:v1.20.0-alpha.3.84+3627a282799b32,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:170832689,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:161839437,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:69364538,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:c0ed56727cd78700034f2f863d774412c78681fb6535456f5e5c420f4248c5a1 k8s.gcr.io/kube-addon-manager:v9.1.1],SizeBytes:30515541,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f 
k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:26526716,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 23 10:49:45.438: INFO: 
Logging kubelet events for node bootstrap-e2e-master
Oct 23 10:49:45.484: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-master
Oct 23 10:49:45.540: INFO: kube-controller-manager-bootstrap-e2e-master started at 2020-10-23 09:46:09 +0000 UTC (0+1 container statuses recorded)
Oct 23 10:49:45.540: INFO: 	Container kube-controller-manager ready: true, restart count 0
... skipping 14 lines ...
Oct 23 10:49:45.540: INFO: 	Container kube-apiserver ready: true, restart count 1
W1023 10:49:45.597586  144144 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 10:49:46.018: INFO: 
Latency metrics for node bootstrap-e2e-master
Oct 23 10:49:46.018: INFO: 
Logging node info for node bootstrap-e2e-minion-group-0324
Oct 23 10:49:46.058: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0324    fe291144-66a8-40b7-8ffc-2b8dcde2ab4d 13931 0 2020-10-23 09:47:41 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0324 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-23 09:47:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-23 09:47:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {kubelet Update v1 2020-10-23 10:12:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"
PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2020-10-23 10:48:29 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-sd-log/us-west1-b/bootstrap-e2e-minion-group-0324,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 
UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-23 09:47:56 +0000 UTC,LastTransitionTime:2020-10-23 09:47:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-23 10:47:17 +0000 UTC,LastTransitionTime:2020-10-23 09:47:41 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-23 10:47:17 +0000 UTC,LastTransitionTime:2020-10-23 09:47:41 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-23 
10:47:17 +0000 UTC,LastTransitionTime:2020-10-23 09:47:41 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-23 10:47:17 +0000 UTC,LastTransitionTime:2020-10-23 09:47:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.52.229,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0324.c.k8s-jkns-gci-gce-sd-log.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0324.c.k8s-jkns-gci-gce-sd-log.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a9739284d77e61ee2094e264e26a6df6,SystemUUID:a9739284-d77e-61ee-2094-e264e26a6df6,BootID:c4e0e60c-4952-4c81-a7b7-781bbaf40987,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.84+3627a282799b32,KubeProxyVersion:v1.20.0-alpha.3.84+3627a282799b32,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:139946865,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a 
k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 23 10:49:46.059: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-0324
Oct 23 10:49:46.097: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-0324
Oct 23 10:49:46.175: INFO: kube-proxy-bootstrap-e2e-minion-group-0324 started at 2020-10-23 09:47:41 +0000 UTC (0+1 container statuses recorded)
Oct 23 10:49:46.175: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 4 lines ...
Oct 23 10:49:46.175: INFO: 	Container pause ready: false, restart count 0
W1023 10:49:46.232836  144144 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 10:49:46.358: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-0324
Oct 23 10:49:46.358: INFO: 
Logging node info for node bootstrap-e2e-minion-group-jt1z
Oct 23 10:49:46.401: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jt1z    d9e18d9a-2c05-4b61-ade2-88bf155e5e1a 13822 0 2020-10-23 09:47:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jt1z kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-23 09:47:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-23 09:47:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-23 10:12:04 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-23 10:12:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message"
:{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-sd-log/us-west1-b/bootstrap-e2e-minion-group-jt1z,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823917056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561773056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-23 
10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-23 10:47:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-23 09:47:56 +0000 UTC,LastTransitionTime:2020-10-23 09:47:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-23 10:48:50 +0000 
UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-23 10:48:50 +0000 UTC,LastTransitionTime:2020-10-23 09:47:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.203.170.212,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jt1z.c.k8s-jkns-gci-gce-sd-log.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jt1z.c.k8s-jkns-gci-gce-sd-log.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c2b79e230f62348b164c83be57d82e97,SystemUUID:c2b79e23-0f62-348b-164c-83be57d82e97,BootID:620bfb65-5d8d-4dfc-9406-df85da1412c2,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.84+3627a282799b32,KubeProxyVersion:v1.20.0-alpha.3.84+3627a282799b32,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:139946865,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:36ca32433c069246ea8988a7b3dbdf0aabf8345be9122b8a25426e6c487878de 
k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0],SizeBytes:17462937,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:15208262,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 23 10:49:46.401: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-jt1z
Oct 23 10:49:46.454: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-jt1z
Oct 23 10:49:46.535: INFO: metrics-server-v0.3.6-8b98f98c9-p6nvr started at 2020-10-23 09:48:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 10:49:46.535: INFO: 	Container metrics-server ready: true, restart count 0
... skipping 11 lines ...
Oct 23 10:49:46.535: INFO: 	Container coredns ready: true, restart count 0
W1023 10:49:46.704208  144144 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 10:49:46.869: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-jt1z
Oct 23 10:49:46.869: INFO: 
Logging node info for node bootstrap-e2e-minion-group-xbjm
Oct 23 10:49:46.907: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-xbjm    a1130550-f2c1-4eb2-92d2-e8c89756cc89 13666 0 2020-10-23 09:47:43 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-xbjm kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-23 09:47:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-23 09:47:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-23 10:12:04 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-23 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message"
:{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-sd-log/us-west1-b/bootstrap-e2e-minion-group-xbjm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-23 
10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-23 10:47:51 +0000 UTC,LastTransitionTime:2020-10-23 09:47:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-23 09:47:56 +0000 UTC,LastTransitionTime:2020-10-23 09:47:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-23 10:44:48 +0000 UTC,LastTransitionTime:2020-10-23 09:47:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-23 10:44:48 +0000 UTC,LastTransitionTime:2020-10-23 09:47:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-23 10:44:48 +0000 
UTC,LastTransitionTime:2020-10-23 09:47:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-23 10:44:48 +0000 UTC,LastTransitionTime:2020-10-23 09:47:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.105.110.3,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-xbjm.c.k8s-jkns-gci-gce-sd-log.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-xbjm.c.k8s-jkns-gci-gce-sd-log.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5f7ad455eb96c390651b79e508f8f4f3,SystemUUID:5f7ad455-eb96-c390-651b-79e508f8f4f3,BootID:9e0ce52d-7444-447a-b6d6-c31174e7cec3,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.84+3627a282799b32,KubeProxyVersion:v1.20.0-alpha.3.84+3627a282799b32,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.84_3627a282799b32],SizeBytes:139946865,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:6362391,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 23 10:49:46.907: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-xbjm
Oct 23 10:49:46.947: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-xbjm
Oct 23 10:49:47.020: INFO: kube-proxy-bootstrap-e2e-minion-group-xbjm started at 2020-10-23 09:47:42 +0000 UTC (0+1 container statuses recorded)
Oct 23 10:49:47.020: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 13 lines ...
• Failure [140.874 seconds]
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  evicts pods with minTolerationSeconds [Disruptive] [Conformance] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Oct 23 10:49:44.860: Failed to evict all Pods. 1 pod(s) was not evicted.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":306,"completed":109,"skipped":1890,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
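The failure above comes from the NoExecuteTaintManager conformance case: two pods tolerate a NoExecute taint with different `tolerationSeconds`, and each must be evicted once its own grace window expires. A minimal sketch of how that per-pod grace window is selected (plain Python with an illustrative taint key; the e2e framework's actual logic is Go code in the taint manager):

```python
def min_toleration_seconds(pod_tolerations, taint_key):
    """Return the shortest tolerationSeconds a pod grants for a
    NoExecute taint with the given key, or None if the pod does
    not tolerate the taint at all (immediate eviction)."""
    matching = [
        t for t in pod_tolerations
        if t.get("effect") == "NoExecute" and t.get("key") == taint_key
    ]
    if not matching:
        return None  # no toleration: evict immediately
    # A matching toleration without tolerationSeconds tolerates forever.
    bounded = [t["tolerationSeconds"] for t in matching
               if "tolerationSeconds" in t]
    if not bounded:
        return float("inf")
    return min(bounded)

# Two pods as in the test shape: same taint, different grace windows.
# The taint key below is illustrative, not the test's actual key.
pod_a = [{"key": "example.com/evict", "operator": "Equal",
          "value": "evictTaintVal", "effect": "NoExecute",
          "tolerationSeconds": 5}]
pod_b = [{"key": "example.com/evict", "operator": "Equal",
          "value": "evictTaintVal", "effect": "NoExecute",
          "tolerationSeconds": 25}]

print(min_toleration_seconds(pod_a, "example.com/evict"))  # 5
print(min_toleration_seconds(pod_b, "example.com/evict"))  # 25
```

The "1 pod(s) not evicted" failure means one pod was still running after its window elapsed (plus the test's slack), which is a timing-sensitive check on Serial/Disruptive jobs.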
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 23 10:49:47.565: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:49:54.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7689" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":306,"completed":110,"skipped":1896,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:50:08.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-253" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":306,"completed":111,"skipped":1899,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
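The quota case above creates a ResourceQuota, runs a pod, and verifies that `status.used` is charged on admission and released after deletion. A toy sketch of that accounting (plain Python; resource names and amounts are illustrative, not the real quota controller):

```python
def admit(quota_hard, quota_used, pod_request):
    """Admit a pod only if every requested resource fits under the
    quota's hard limit; on success return the updated usage."""
    new_used = dict(quota_used)
    for resource, amount in pod_request.items():
        proposed = new_used.get(resource, 0) + amount
        if proposed > quota_hard.get(resource, float("inf")):
            raise ValueError(f"exceeded quota for {resource}")
        new_used[resource] = proposed
    return new_used

def release(quota_used, pod_request):
    """Return usage after the pod is deleted."""
    return {r: quota_used.get(r, 0) - a for r, a in pod_request.items()}

hard = {"pods": 1, "cpu": 1000}   # cpu in millicores, illustrative
used = {"pods": 0, "cpu": 0}
pod = {"pods": 1, "cpu": 500}

used = admit(hard, used, pod)     # usage captured while the pod runs
used = release(used, pod)         # usage released after deletion
print(used)                       # {'pods': 0, 'cpu': 0}
```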
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 171 lines ...
Oct 23 10:50:54.411: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"14192"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:50:54.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7790" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":306,"completed":112,"skipped":1948,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
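The RollingUpdate case above patches the DaemonSet's pod template and watches old daemon pods being replaced node by node, never exceeding the strategy's `maxUnavailable`. A sketch of resolving `maxUnavailable` against the node count (plain Python; treat the floor-rounding and minimum-of-1 behavior here as an assumption, since the controller's exact rounding rules live in the apps controller):

```python
import math

def max_unavailable_count(node_count, max_unavailable):
    """Resolve a RollingUpdate maxUnavailable value (an int or a
    percent string like "25%") against the number of nodes the
    DaemonSet is scheduled on."""
    if isinstance(max_unavailable, str) and max_unavailable.endswith("%"):
        pct = int(max_unavailable[:-1])
        resolved = math.floor(node_count * pct / 100)
    else:
        resolved = int(max_unavailable)
    return max(resolved, 1)  # assumed floor so the rollout makes progress

print(max_unavailable_count(3, 1))       # 1: matches this 3-node minion group
print(max_unavailable_count(10, "25%"))  # 2
```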
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:50:57.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6193" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":306,"completed":113,"skipped":1951,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 23 10:50:58.788: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:50:58.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8380" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":306,"completed":114,"skipped":1967,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 10:50:59.101: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:02.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2841" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":306,"completed":115,"skipped":1976,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
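The CRD case above verifies that OpenAPI v3 `default` values declared in the schema are applied both to incoming requests and to objects read back from storage. A simplified sketch of that defaulting pass (plain Python; real structural-schema defaulting also covers arrays, additionalProperties, and pruning):

```python
def apply_defaults(obj, schema):
    """Recursively fill missing fields of obj with the schema's
    `default` values (object properties only, for illustration)."""
    for name, prop in schema.get("properties", {}).items():
        if name not in obj and "default" in prop:
            obj[name] = prop["default"]
        if isinstance(obj.get(name), dict):
            apply_defaults(obj[name], prop)
    return obj

# Hypothetical custom-resource schema, not the one used by the test.
schema = {"properties": {
    "spec": {"properties": {
        "replicas": {"type": "integer", "default": 1},
        "paused":   {"type": "boolean", "default": False},
    }}}}

print(apply_defaults({"spec": {"replicas": 3}}, schema))
# {'spec': {'replicas': 3, 'paused': False}}
```

The "from storage works" half of the test name matters because defaulting must also run on read, so objects persisted before a default was added still come back complete.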
SSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 10:51:03.019: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap that has name configmap-test-emptyKey-f7b3a3c6-5422-4d49-9371-d7a6303037be
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8054" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":306,"completed":116,"skipped":1980,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
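The test above confirms the API server rejects a ConfigMap whose `data` map contains an empty key (keys must be non-empty and match `[-._a-zA-Z0-9]+`). A minimal manifest that would be rejected looks like this (name is illustrative, not the one the test generates):

```yaml
# Rejected by the API server: ConfigMap data keys must be non-empty.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey   # illustrative name
data:
  "": some-value                  # empty key -> Invalid error on create
```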
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 10:51:03.356: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 23 10:51:04.588: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:04.722: INFO: Number of nodes with available pods: 0
Oct 23 10:51:04.722: INFO: Node bootstrap-e2e-minion-group-0324 is running more than one daemon pod
... skipping 6 lines ...
Oct 23 10:51:07.851: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:07.949: INFO: Number of nodes with available pods: 1
Oct 23 10:51:07.949: INFO: Node bootstrap-e2e-minion-group-0324 is running more than one daemon pod
Oct 23 10:51:08.868: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:08.924: INFO: Number of nodes with available pods: 3
Oct 23 10:51:08.924: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 23 10:51:09.324: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:09.377: INFO: Number of nodes with available pods: 2
Oct 23 10:51:09.377: INFO: Node bootstrap-e2e-minion-group-0324 is running more than one daemon pod
Oct 23 10:51:10.429: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:10.468: INFO: Number of nodes with available pods: 2
Oct 23 10:51:10.468: INFO: Node bootstrap-e2e-minion-group-0324 is running more than one daemon pod
Oct 23 10:51:11.441: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 23 10:51:11.491: INFO: Number of nodes with available pods: 3
Oct 23 10:51:11.491: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9684, will wait for the garbage collector to delete the pods
Oct 23 10:51:11.753: INFO: Deleting DaemonSet.extensions daemon-set took: 61.492631ms
Oct 23 10:51:11.954: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.284352ms
... skipping 4 lines ...
Oct 23 10:51:24.170: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"14388"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:24.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9684" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":306,"completed":117,"skipped":2005,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
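The DaemonSet test above creates a simple DaemonSet, forces one of its pods into the `Failed` phase, and checks that the controller recreates it. A sketch of a comparable DaemonSet (image and labels are assumptions, not the test's exact spec) — note it declares no toleration for the master's `node-role.kubernetes.io/master:NoSchedule` taint, which is why the log skips that node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set              # matches the name in the log above
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # illustrative image
      # no tolerations: pods land only on schedulable worker nodes
```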
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-820/secret-test-9133f1d6-94c9-44d6-a0eb-fe2bad3db50e
STEP: Creating a pod to test consume secrets
Oct 23 10:51:24.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda" in namespace "secrets-820" to be "Succeeded or Failed"
Oct 23 10:51:24.856: INFO: Pod "pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda": Phase="Pending", Reason="", readiness=false. Elapsed: 124.085916ms
Oct 23 10:51:26.938: INFO: Pod "pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206112443s
STEP: Saw pod success
Oct 23 10:51:26.938: INFO: Pod "pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda" satisfied condition "Succeeded or Failed"
Oct 23 10:51:26.998: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda container env-test: <nil>
STEP: delete the pod
Oct 23 10:51:27.161: INFO: Waiting for pod pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda to disappear
Oct 23 10:51:27.208: INFO: Pod pod-configmaps-83762408-5197-43fb-aa39-d514afda5eda no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:27.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-820" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":118,"skipped":2009,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
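"Consumable via the environment" here means injecting a Secret key into a container's environment with `secretKeyRef`. A minimal sketch (pod, secret, and key names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-test           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                # illustrative image
    command: ["sh", "-c", "env"]  # prints the injected variable and exits
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test       # illustrative Secret name
          key: data-1             # illustrative key within the Secret
```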

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 14 lines ...
Oct 23 10:51:32.398: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:45.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1645" for this suite.
STEP: Destroying namespace "webhook-1645-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":306,"completed":119,"skipped":2009,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
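The timeout behavior being exercised above is controlled by the `timeoutSeconds` and `failurePolicy` fields of a webhook registration. A sketch of the "timeout shorter than latency, failure policy ignore" case (names and the service path are assumptions modeled on the log; in v1 an omitted `timeoutSeconds` defaults to 10):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook              # illustrative name
webhooks:
- name: slow.example.com          # illustrative name
  timeoutSeconds: 1               # shorter than the webhook's 5s latency
  failurePolicy: Ignore           # requests succeed despite the timeout
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      namespace: webhook-1645     # namespace from the log above
      name: e2e-test-webhook      # service name from the log above
      path: /always-allow-delay-5s  # assumed handler path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
```

With `failurePolicy: Fail` instead, the same 1s timeout would reject the request, which is the first case the test checks.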
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:50.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9054" for this suite.
STEP: Destroying namespace "webhook-9054-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":306,"completed":120,"skipped":2061,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 23 10:51:53.680: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:53.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8756" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":306,"completed":121,"skipped":2090,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 10:51:54.081: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-879143a2-8ac6-4539-b0c5-5a0b6c87eb1a" in namespace "security-context-test-4999" to be "Succeeded or Failed"
Oct 23 10:51:54.117: INFO: Pod "busybox-readonly-false-879143a2-8ac6-4539-b0c5-5a0b6c87eb1a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.597984ms
Oct 23 10:51:56.157: INFO: Pod "busybox-readonly-false-879143a2-8ac6-4539-b0c5-5a0b6c87eb1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075822953s
Oct 23 10:51:56.157: INFO: Pod "busybox-readonly-false-879143a2-8ac6-4539-b0c5-5a0b6c87eb1a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:56.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4999" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":306,"completed":122,"skipped":2111,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
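The `readOnlyRootFilesystem=false` case above boils down to a container-level `securityContext` setting that leaves the root filesystem writable. A minimal sketch (pod name mirrors the log's prefix; image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false    # mirrors the pod-name prefix in the log
spec:
  restartPolicy: Never
  containers:
  - name: writable-rootfs
    image: busybox                # illustrative image
    command: ["sh", "-c", "touch /tmp/writable && echo ok"]  # write succeeds
    securityContext:
      readOnlyRootFilesystem: false   # rootfs stays writable
```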
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-138d5e0b-11a9-4a49-88bf-05a2cc866b28
STEP: Creating a pod to test consume secrets
Oct 23 10:51:56.571: INFO: Waiting up to 5m0s for pod "pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583" in namespace "secrets-2172" to be "Succeeded or Failed"
Oct 23 10:51:56.608: INFO: Pod "pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583": Phase="Pending", Reason="", readiness=false. Elapsed: 37.054301ms
Oct 23 10:51:58.647: INFO: Pod "pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075329871s
STEP: Saw pod success
Oct 23 10:51:58.647: INFO: Pod "pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583" satisfied condition "Succeeded or Failed"
Oct 23 10:51:58.685: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583 container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 10:51:58.774: INFO: Waiting for pod pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583 to disappear
Oct 23 10:51:58.811: INFO: Pod pod-secrets-7ba4c05d-5af1-4995-acd9-54d5435b9583 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:51:58.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2172" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":123,"skipped":2126,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
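"Mappings and Item Mode set" refers to the `items` list of a secret volume, which remaps a key to a chosen file path and sets a per-file mode. A sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # illustrative Secret name
      items:
      - key: data-1               # illustrative key
        path: new-path-data-1     # remapped file name inside the mount
        mode: 0400                # per-item file mode
```

The sibling `defaultMode` test a few entries below uses `defaultMode` on the volume instead of per-item `mode` entries.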
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Oct 23 10:52:01.975: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:01.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-49" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":306,"completed":124,"skipped":2201,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 10:52:02.561: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8043f3f5-fff3-4a75-9845-2d23a4eacf62" in namespace "security-context-test-6586" to be "Succeeded or Failed"
Oct 23 10:52:02.598: INFO: Pod "alpine-nnp-false-8043f3f5-fff3-4a75-9845-2d23a4eacf62": Phase="Pending", Reason="", readiness=false. Elapsed: 36.914498ms
Oct 23 10:52:04.637: INFO: Pod "alpine-nnp-false-8043f3f5-fff3-4a75-9845-2d23a4eacf62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076178268s
Oct 23 10:52:06.675: INFO: Pod "alpine-nnp-false-8043f3f5-fff3-4a75-9845-2d23a4eacf62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114222324s
Oct 23 10:52:06.675: INFO: Pod "alpine-nnp-false-8043f3f5-fff3-4a75-9845-2d23a4eacf62" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:06.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6586" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":125,"skipped":2203,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
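`allowPrivilegeEscalation: false` sets the `no_new_privs` flag on the container process, so setuid binaries and similar mechanisms cannot grant more privileges. A minimal sketch (name mirrors the log's prefix; image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false          # mirrors the pod-name prefix in the log
spec:
  restartPolicy: Never
  containers:
  - name: no-new-privs
    image: alpine                 # illustrative image
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false   # process runs with no_new_privs set
```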
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-fd0498dd-72e3-4526-b0ff-3ae66f192aac
STEP: Creating a pod to test consume secrets
Oct 23 10:52:07.172: INFO: Waiting up to 5m0s for pod "pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d" in namespace "secrets-2715" to be "Succeeded or Failed"
Oct 23 10:52:07.223: INFO: Pod "pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.050622ms
Oct 23 10:52:09.261: INFO: Pod "pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088081066s
Oct 23 10:52:11.299: INFO: Pod "pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126734626s
STEP: Saw pod success
Oct 23 10:52:11.299: INFO: Pod "pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d" satisfied condition "Succeeded or Failed"
Oct 23 10:52:11.338: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 10:52:11.449: INFO: Waiting for pod pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d to disappear
Oct 23 10:52:11.486: INFO: Pod pod-secrets-80970492-c10f-43be-8e63-e875c9f9d45d no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:11.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2715" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":126,"skipped":2227,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 10:52:11.751: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7181" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":306,"completed":127,"skipped":2227,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
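The `/status` sub-resource exercised above is opt-in per CRD version via `subresources.status`. A sketch of a CRD that enables it (group, kind, and schema are illustrative, not what the test registers):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # illustrative
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                      # enables GET/PUT/PATCH on /status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```

With this in place, writes to the main resource ignore `.status`, and writes to `/status` ignore everything else.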
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 23 10:52:18.457: INFO: Successfully updated pod "labelsupdateb6235b2a-73ed-47ed-92c8-86b9c3094eb8"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:20.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8466" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":128,"skipped":2234,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
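The label-update test relies on a projected downwardAPI volume: when the pod's labels change, the kubelet rewrites the file in the mounted volume. A sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate              # illustrative name
  labels:
    key: value1                   # updated later by the test
spec:
  containers:
  - name: client
    image: busybox                # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # file tracks label changes
```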
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-4d14fe2d-b767-46b5-8d92-08c47a29b15b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:27.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1347" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":129,"skipped":2250,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
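"Optional updates" here covers ConfigMap volumes marked `optional: true`, which let the pod start before the ConfigMap exists and pick up its contents once it is created. A pod-spec fragment sketching this (ConfigMap name taken from the log; the rest is assumed):

```yaml
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-create-4d14fe2d-b767-46b5-8d92-08c47a29b15b
      optional: true              # pod starts even if the ConfigMap is absent
```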
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:39.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3023" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":306,"completed":130,"skipped":2253,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
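A `preStop` hook runs before the kubelet sends the container its termination signal; the test verifies the hook actually fires during pod deletion. A sketch of a pod with an HTTP preStop handler (every name and address is an assumption; exec and tcpSocket handlers work the same way):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-client            # illustrative name
spec:
  containers:
  - name: app
    image: busybox                # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        httpGet:                  # called before the container is killed
          host: 10.0.0.1          # illustrative server address
          port: 8080
          path: /prestop
```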
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-7141/configmap-test-66a9e9a8-752b-4acf-b802-0f231950b398
STEP: Creating a pod to test consume configMaps
Oct 23 10:52:40.099: INFO: Waiting up to 5m0s for pod "pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99" in namespace "configmap-7141" to be "Succeeded or Failed"
Oct 23 10:52:40.159: INFO: Pod "pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99": Phase="Pending", Reason="", readiness=false. Elapsed: 60.83279ms
Oct 23 10:52:42.205: INFO: Pod "pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10640307s
STEP: Saw pod success
Oct 23 10:52:42.205: INFO: Pod "pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99" satisfied condition "Succeeded or Failed"
Oct 23 10:52:42.272: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99 container env-test: <nil>
STEP: delete the pod
Oct 23 10:52:42.518: INFO: Waiting for pod pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99 to disappear
Oct 23 10:52:42.644: INFO: Pod pod-configmaps-de0eb3e7-690a-4d50-87e6-022cf475ab99 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:42.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7141" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":306,"completed":131,"skipped":2329,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
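This is the ConfigMap counterpart of the Secret environment test earlier: the key is injected with `configMapKeyRef` instead of `secretKeyRef`. A container-spec fragment (ConfigMap and key names are illustrative):

```yaml
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-test    # illustrative ConfigMap name
          key: data-1             # illustrative key
```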
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:43.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9522" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":306,"completed":132,"skipped":2347,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-252b63c5-2dc4-42a1-97cf-7cfbe3c22868
STEP: Creating a pod to test consume configMaps
Oct 23 10:52:44.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f" in namespace "configmap-2603" to be "Succeeded or Failed"
Oct 23 10:52:44.266: INFO: Pod "pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.754283ms
Oct 23 10:52:46.306: INFO: Pod "pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.090291636s
STEP: Saw pod success
Oct 23 10:52:46.306: INFO: Pod "pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f" satisfied condition "Succeeded or Failed"
Oct 23 10:52:46.347: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 10:52:46.460: INFO: Waiting for pod pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f to disappear
Oct 23 10:52:46.500: INFO: Pod pod-configmaps-b0068cdf-8e13-441a-8390-14de1683147f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:46.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2603" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":133,"skipped":2358,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:52:46.582: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 23 10:52:46.830: INFO: Waiting up to 5m0s for pod "pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a" in namespace "emptydir-9420" to be "Succeeded or Failed"
Oct 23 10:52:46.883: INFO: Pod "pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.635584ms
Oct 23 10:52:48.944: INFO: Pod "pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.113886447s
STEP: Saw pod success
Oct 23 10:52:48.944: INFO: Pod "pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a" satisfied condition "Succeeded or Failed"
Oct 23 10:52:49.049: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a container test-container: <nil>
STEP: delete the pod
Oct 23 10:52:49.279: INFO: Waiting for pod pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a to disappear
Oct 23 10:52:49.359: INFO: Pod pod-9f3eac31-090f-442d-b4a3-ef47eaa0dc7a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:49.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9420" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":134,"skipped":2370,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:52:50.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683" in namespace "downward-api-678" to be "Succeeded or Failed"
Oct 23 10:52:50.412: INFO: Pod "downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683": Phase="Pending", Reason="", readiness=false. Elapsed: 65.794584ms
Oct 23 10:52:52.451: INFO: Pod "downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105192103s
STEP: Saw pod success
Oct 23 10:52:52.451: INFO: Pod "downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683" satisfied condition "Succeeded or Failed"
Oct 23 10:52:52.491: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683 container client-container: <nil>
STEP: delete the pod
Oct 23 10:52:52.585: INFO: Waiting for pod downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683 to disappear
Oct 23 10:52:52.623: INFO: Pod downwardapi-volume-db92ad0b-7617-4edb-a482-ab64d11b9683 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:52.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-678" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":135,"skipped":2376,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:52:58.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1504" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":306,"completed":136,"skipped":2377,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:05.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9788" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":306,"completed":137,"skipped":2379,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:10.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-805" for this suite.
STEP: Destroying namespace "webhook-805-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":306,"completed":138,"skipped":2401,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:15.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3732" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":139,"skipped":2419,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 23 10:53:18.802: INFO: Successfully updated pod "annotationupdate22c29984-aab4-4a11-9911-147e3d725281"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:23.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2717" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":140,"skipped":2424,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Oct 23 10:53:27.638: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-8827 pod-service-account-b60491d1-f09a-407c-bb13-6d09dd10c745 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:28.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8827" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":306,"completed":141,"skipped":2425,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 23 10:53:28.354: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 23 10:53:28.586: INFO: Waiting up to 5m0s for pod "downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508" in namespace "downward-api-1829" to be "Succeeded or Failed"
Oct 23 10:53:28.624: INFO: Pod "downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508": Phase="Pending", Reason="", readiness=false. Elapsed: 37.949187ms
Oct 23 10:53:30.664: INFO: Pod "downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078060746s
Oct 23 10:53:32.707: INFO: Pod "downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121032007s
STEP: Saw pod success
Oct 23 10:53:32.707: INFO: Pod "downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508" satisfied condition "Succeeded or Failed"
Oct 23 10:53:32.756: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508 container dapi-container: <nil>
STEP: delete the pod
Oct 23 10:53:33.197: INFO: Waiting for pod downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508 to disappear
Oct 23 10:53:33.249: INFO: Pod downward-api-1dae2298-49ef-4b70-884e-8f32fcacb508 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:33.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1829" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":306,"completed":142,"skipped":2430,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:40.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2355" for this suite.
STEP: Destroying namespace "webhook-2355-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":306,"completed":143,"skipped":2503,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:41.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3791" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":306,"completed":144,"skipped":2530,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:53:41.535: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 23 10:53:41.767: INFO: Waiting up to 5m0s for pod "pod-bb90799c-a139-4c47-94ec-da285d3bd721" in namespace "emptydir-5123" to be "Succeeded or Failed"
Oct 23 10:53:41.806: INFO: Pod "pod-bb90799c-a139-4c47-94ec-da285d3bd721": Phase="Pending", Reason="", readiness=false. Elapsed: 38.776949ms
Oct 23 10:53:43.845: INFO: Pod "pod-bb90799c-a139-4c47-94ec-da285d3bd721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07814206s
STEP: Saw pod success
Oct 23 10:53:43.845: INFO: Pod "pod-bb90799c-a139-4c47-94ec-da285d3bd721" satisfied condition "Succeeded or Failed"
Oct 23 10:53:43.884: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-bb90799c-a139-4c47-94ec-da285d3bd721 container test-container: <nil>
STEP: delete the pod
Oct 23 10:53:43.975: INFO: Waiting for pod pod-bb90799c-a139-4c47-94ec-da285d3bd721 to disappear
Oct 23 10:53:44.013: INFO: Pod pod-bb90799c-a139-4c47-94ec-da285d3bd721 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:53:44.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5123" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":145,"skipped":2561,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-dc3b2945-1023-4651-bb37-13fbe67129f6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:55:17.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8546" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":146,"skipped":2561,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Oct 23 10:57:33.793: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:57:33.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-6982" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":306,"completed":147,"skipped":2572,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 56 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:57:43.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1670" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":306,"completed":148,"skipped":2584,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:57:47.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6607" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":306,"completed":149,"skipped":2593,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 23 10:57:47.264: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override all
Oct 23 10:57:47.501: INFO: Waiting up to 5m0s for pod "client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f" in namespace "containers-9032" to be "Succeeded or Failed"
Oct 23 10:57:47.541: INFO: Pod "client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.653935ms
Oct 23 10:57:49.581: INFO: Pod "client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079452206s
STEP: Saw pod success
Oct 23 10:57:49.581: INFO: Pod "client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f" satisfied condition "Succeeded or Failed"
Oct 23 10:57:49.618: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f container agnhost-container: <nil>
STEP: delete the pod
Oct 23 10:57:49.752: INFO: Waiting for pod client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f to disappear
Oct 23 10:57:49.789: INFO: Pod client-containers-a9a384d5-4ad3-4032-a37d-dacced94933f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:57:49.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9032" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":306,"completed":150,"skipped":2593,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 62 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:57:55.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9667" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":306,"completed":151,"skipped":2626,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Oct 23 10:57:55.739: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a1ee97fd-b8bf-46c3-826f-103734a526dc", Controller:(*bool)(0xc001cc9696), BlockOwnerDeletion:(*bool)(0xc001cc9697)}}
Oct 23 10:57:55.780: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a348765d-69c9-46fc-b2f7-ab9850c55bc6", Controller:(*bool)(0xc001cc9b96), BlockOwnerDeletion:(*bool)(0xc001cc9b97)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:00.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8791" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":306,"completed":152,"skipped":2639,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:02.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5688" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":306,"completed":153,"skipped":2701,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:05.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4317" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":306,"completed":154,"skipped":2713,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 10:58:05.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b" in namespace "downward-api-3532" to be "Succeeded or Failed"
Oct 23 10:58:05.475: INFO: Pod "downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.75407ms
Oct 23 10:58:07.516: INFO: Pod "downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.087461935s
STEP: Saw pod success
Oct 23 10:58:07.516: INFO: Pod "downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b" satisfied condition "Succeeded or Failed"
Oct 23 10:58:07.554: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b container client-container: <nil>
STEP: delete the pod
Oct 23 10:58:07.652: INFO: Waiting for pod downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b to disappear
Oct 23 10:58:07.689: INFO: Pod downwardapi-volume-af0a8dc5-8c5a-455b-8626-a2278035b86b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:07.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3532" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":155,"skipped":2720,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 23 10:58:10.902: INFO: Successfully updated pod "annotationupdatef03a7b15-d706-43e7-a377-75cef8e7868f"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:15.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7613" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":156,"skipped":2758,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:58:15.155: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 23 10:58:15.497: INFO: Waiting up to 5m0s for pod "pod-8a8724b7-9540-4a4e-a293-adeb9767a94c" in namespace "emptydir-6482" to be "Succeeded or Failed"
Oct 23 10:58:15.555: INFO: Pod "pod-8a8724b7-9540-4a4e-a293-adeb9767a94c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.387079ms
Oct 23 10:58:17.594: INFO: Pod "pod-8a8724b7-9540-4a4e-a293-adeb9767a94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096355742s
STEP: Saw pod success
Oct 23 10:58:17.594: INFO: Pod "pod-8a8724b7-9540-4a4e-a293-adeb9767a94c" satisfied condition "Succeeded or Failed"
Oct 23 10:58:17.633: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-8a8724b7-9540-4a4e-a293-adeb9767a94c container test-container: <nil>
STEP: delete the pod
Oct 23 10:58:17.722: INFO: Waiting for pod pod-8a8724b7-9540-4a4e-a293-adeb9767a94c to disappear
Oct 23 10:58:17.760: INFO: Pod pod-8a8724b7-9540-4a4e-a293-adeb9767a94c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:17.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6482" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":157,"skipped":2762,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 23 10:58:18.032: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 10:58:23.483: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1507" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":306,"completed":158,"skipped":2763,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Oct 23 10:58:49.003: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:49.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4257" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":306,"completed":159,"skipped":2769,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:49.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5405" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":160,"skipped":2772,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:50.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2701" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":306,"completed":161,"skipped":2788,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:58:58.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8026" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":306,"completed":162,"skipped":2798,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 10:58:58.178: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 23 10:58:58.408: INFO: Waiting up to 5m0s for pod "pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f" in namespace "emptydir-70" to be "Succeeded or Failed"
Oct 23 10:58:58.450: INFO: Pod "pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.370191ms
Oct 23 10:59:00.488: INFO: Pod "pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080361777s
STEP: Saw pod success
Oct 23 10:59:00.488: INFO: Pod "pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f" satisfied condition "Succeeded or Failed"
Oct 23 10:59:00.527: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f container test-container: <nil>
STEP: delete the pod
Oct 23 10:59:00.623: INFO: Waiting for pod pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f to disappear
Oct 23 10:59:00.662: INFO: Pod pod-733bfc70-f3b2-4a95-9096-10b1fbabcd7f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:00.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-70" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":163,"skipped":2813,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:06.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3958" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":306,"completed":164,"skipped":2850,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Oct 23 10:59:07.019: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8457  465288c0-8cef-44e3-83b9-c697de82235f 16471 0 2020-10-23 10:59:06 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-23 10:59:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 23 10:59:07.019: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8457  465288c0-8cef-44e3-83b9-c697de82235f 16472 0 2020-10-23 10:59:06 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-23 10:59:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:07.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8457" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":306,"completed":165,"skipped":2852,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:12.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8083" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":306,"completed":166,"skipped":2891,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Oct 23 10:59:15.151: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 10:59:15.632: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:15.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9214" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":306,"completed":167,"skipped":2895,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 10:59:21.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4162" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":306,"completed":168,"skipped":2915,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 10:59:21.918: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
Oct 23 11:01:23.147: INFO: Successfully updated pod "var-expansion-ce33ffc0-f945-40b4-9af7-b6c5131381c7"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Oct 23 11:01:25.251: INFO: Deleting pod "var-expansion-ce33ffc0-f945-40b4-9af7-b6c5131381c7" in namespace "var-expansion-7248"
Oct 23 11:01:25.326: INFO: Wait up to 5m0s for pod "var-expansion-ce33ffc0-f945-40b4-9af7-b6c5131381c7" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:02:03.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7248" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":306,"completed":169,"skipped":2942,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 23 11:02:05.945: INFO: Initial restart count of pod busybox-f00b4161-17fc-4134-ac80-e4eeb3d9cc43 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:06:07.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2261" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":170,"skipped":2972,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Oct 23 11:06:15.025: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 11:06:15.396: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:06:15.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-757" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":171,"skipped":2985,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:06:32.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4013" for this suite.
STEP: Destroying namespace "webhook-4013-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":306,"completed":172,"skipped":2997,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 23 lines ...
Oct 23 11:06:38.673: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:38.714: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:38.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:38.798: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:38.841: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:38.922: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:39.175: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc jessie_tcp@_http._tcp.dns-test-service.dns-8134.svc]

Oct 23 11:06:44.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.253: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.292: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.330: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.411: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.771: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.812: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.852: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.930: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:44.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:45.296: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc]

Oct 23 11:06:49.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.254: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.292: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.331: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.409: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.767: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.806: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.844: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.884: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.922: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:49.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:50.283: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc]

Oct 23 11:06:54.241: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:54.294: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:54.336: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:54.390: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:54.498: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:54.543: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.265: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.305: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.445: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.491: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.530: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.574: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:55.896: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc]

Oct 23 11:06:59.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.291: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.336: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.414: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.766: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.804: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.842: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.881: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:06:59.919: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:00.007: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:00.623: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc]

Oct 23 11:07:04.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.296: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.338: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.420: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.782: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.821: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.860: INFO: Unable to read jessie_udp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.899: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134 from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.938: INFO: Unable to read jessie_udp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:04.977: INFO: Unable to read jessie_tcp@dns-test-service.dns-8134.svc from pod dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228: the server could not find the requested resource (get pods dns-test-0c030182-e7bc-45e7-9f52-256186d47228)
Oct 23 11:07:05.296: INFO: Lookups using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8134 wheezy_tcp@dns-test-service.dns-8134 wheezy_udp@dns-test-service.dns-8134.svc wheezy_tcp@dns-test-service.dns-8134.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8134 jessie_tcp@dns-test-service.dns-8134 jessie_udp@dns-test-service.dns-8134.svc jessie_tcp@dns-test-service.dns-8134.svc]

Oct 23 11:07:10.681: INFO: DNS probes using dns-8134/dns-test-0c030182-e7bc-45e7-9f52-256186d47228 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:11.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8134" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":306,"completed":173,"skipped":3056,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:07:11.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f" in namespace "projected-8376" to be "Succeeded or Failed"
Oct 23 11:07:11.814: INFO: Pod "downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.424117ms
Oct 23 11:07:13.852: INFO: Pod "downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075246843s
STEP: Saw pod success
Oct 23 11:07:13.852: INFO: Pod "downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f" satisfied condition "Succeeded or Failed"
Oct 23 11:07:13.890: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f container client-container: <nil>
STEP: delete the pod
Oct 23 11:07:13.991: INFO: Waiting for pod downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f to disappear
Oct 23 11:07:14.028: INFO: Pod downwardapi-volume-2e333fd7-432c-43c6-b69d-d1e65836726f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:14.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8376" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":174,"skipped":3056,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:07:16.817: INFO: Waiting up to 5m0s for pod "client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba" in namespace "pods-6896" to be "Succeeded or Failed"
Oct 23 11:07:16.882: INFO: Pod "client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba": Phase="Pending", Reason="", readiness=false. Elapsed: 64.732773ms
Oct 23 11:07:18.936: INFO: Pod "client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.119428772s
STEP: Saw pod success
Oct 23 11:07:18.936: INFO: Pod "client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba" satisfied condition "Succeeded or Failed"
Oct 23 11:07:18.980: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba container env3cont: <nil>
STEP: delete the pod
Oct 23 11:07:19.210: INFO: Waiting for pod client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba to disappear
Oct 23 11:07:19.488: INFO: Pod client-envvars-7ffcc92a-cf14-4323-bf7c-36a966301cba no longer exists
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:19.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6896" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":306,"completed":175,"skipped":3060,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-37dedeee-249a-44a7-8dd8-bb706f2e93af
STEP: Creating a pod to test consume secrets
Oct 23 11:07:20.280: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7" in namespace "projected-92" to be "Succeeded or Failed"
Oct 23 11:07:20.352: INFO: Pod "pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7": Phase="Pending", Reason="", readiness=false. Elapsed: 71.812168ms
Oct 23 11:07:22.390: INFO: Pod "pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109361673s
STEP: Saw pod success
Oct 23 11:07:22.390: INFO: Pod "pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7" satisfied condition "Succeeded or Failed"
Oct 23 11:07:22.427: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7 container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:07:22.518: INFO: Waiting for pod pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7 to disappear
Oct 23 11:07:22.554: INFO: Pod pod-projected-secrets-1d5dd3f6-fbe9-4eac-95b4-e736e84fb0a7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:22.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-92" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":176,"skipped":3082,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-a476bbfc-7239-4830-b9ee-bf7af79bf996
STEP: Creating secret with name secret-projected-all-test-volume-107df45f-312c-4554-8436-2cfcb9c0cc4a
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 23 11:07:22.950: INFO: Waiting up to 5m0s for pod "projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3" in namespace "projected-5502" to be "Succeeded or Failed"
Oct 23 11:07:23.001: INFO: Pod "projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3": Phase="Pending", Reason="", readiness=false. Elapsed: 50.784405ms
Oct 23 11:07:25.088: INFO: Pod "projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.137409518s
STEP: Saw pod success
Oct 23 11:07:25.088: INFO: Pod "projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3" satisfied condition "Succeeded or Failed"
Oct 23 11:07:25.224: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3 container projected-all-volume-test: <nil>
STEP: delete the pod
Oct 23 11:07:25.384: INFO: Waiting for pod projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3 to disappear
Oct 23 11:07:25.422: INFO: Pod projected-volume-b6e7e17f-0af4-4b7f-8a17-d12e668a72f3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:25.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5502" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":306,"completed":177,"skipped":3086,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:07:25.715: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:28.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4028" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":306,"completed":178,"skipped":3087,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8647" for this suite.
STEP: Destroying namespace "nsdeletetest-6188" for this suite.
Oct 23 11:07:38.632: INFO: Namespace nsdeletetest-6188 was already deleted
STEP: Destroying namespace "nsdeletetest-2184" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":306,"completed":179,"skipped":3096,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-a1c89a26-edca-49c8-9ac6-bbac154d7b76
STEP: Creating a pod to test consume secrets
Oct 23 11:07:38.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510" in namespace "projected-596" to be "Succeeded or Failed"
Oct 23 11:07:38.977: INFO: Pod "pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510": Phase="Pending", Reason="", readiness=false. Elapsed: 37.01777ms
Oct 23 11:07:41.015: INFO: Pod "pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075634409s
STEP: Saw pod success
Oct 23 11:07:41.015: INFO: Pod "pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510" satisfied condition "Succeeded or Failed"
Oct 23 11:07:41.054: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:07:41.144: INFO: Waiting for pod pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510 to disappear
Oct 23 11:07:41.215: INFO: Pod pod-projected-secrets-d98085ef-a2a1-4429-8987-35869b4ce510 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:41.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-596" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":180,"skipped":3124,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Oct 23 11:07:45.607: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 23 11:07:45.607: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6851 describe pod agnhost-primary-48mnd'
Oct 23 11:07:45.891: INFO: stderr: ""
Oct 23 11:07:45.891: INFO: stdout: "Name:         agnhost-primary-48mnd\nNamespace:    kubectl-6851\nPriority:     0\nNode:         bootstrap-e2e-minion-group-0324/10.138.0.4\nStart Time:   Fri, 23 Oct 2020 11:07:42 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.1.149\nIPs:\n  IP:           10.64.1.149\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://ad01cdef97b6aa0e3279959cf734c200a3b485a7ba902239e558dbc3c82e9ab1\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 23 Oct 2020 11:07:43 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7ccc2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-7ccc2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-7ccc2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-6851/agnhost-primary-48mnd to bootstrap-e2e-minion-group-0324\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
Oct 23 11:07:45.892: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6851 describe rc agnhost-primary'
Oct 23 11:07:46.249: INFO: stderr: ""
Oct 23 11:07:46.249: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6851\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-primary-48mnd\n"
Oct 23 11:07:46.249: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6851 describe service agnhost-primary'
Oct 23 11:07:46.577: INFO: stderr: ""
Oct 23 11:07:46.577: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6851\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP:                10.0.117.34\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.1.149:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 23 11:07:46.616: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6851 describe node bootstrap-e2e-master'
Oct 23 11:07:47.170: INFO: stderr: ""
Oct 23 11:07:47.170: INFO: stdout: "Name:               bootstrap-e2e-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=bootstrap-e2e-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-west1\n                    topology.kubernetes.io/zone=us-west1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 23 Oct 2020 09:47:35 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  bootstrap-e2e-master\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 23 Oct 2020 11:07:38 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 23 Oct 2020 09:47:44 +0000   Fri, 23 Oct 2020 09:47:44 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Fri, 23 Oct 2020 11:03:26 +0000   Fri, 23 Oct 2020 09:47:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 23 Oct 2020 11:03:26 +0000   Fri, 23 Oct 2020 
09:47:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 23 Oct 2020 11:03:26 +0000   Fri, 23 Oct 2020 09:47:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 23 Oct 2020 11:03:26 +0000   Fri, 23 Oct 2020 09:47:35 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.138.0.2\n  ExternalIP:   34.82.199.23\n  InternalDNS:  bootstrap-e2e-master.c.k8s-jkns-gci-gce-sd-log.internal\n  Hostname:     bootstrap-e2e-master.c.k8s-jkns-gci-gce-sd-log.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3776188Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3520188Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 2992e78d768c5f76bfaddf89661c75dc\n  System UUID:                2992e78d-768c-5f76-bfad-df89661c75dc\n  Boot ID:                    28ac2e50-e450-4038-865c-0aee5b7e1edf\n  Kernel Version:             5.4.49+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.1\n  Kubelet Version:            v1.20.0-alpha.3.84+3627a282799b32\n  Kube-Proxy Version:         v1.20.0-alpha.3.84+3627a282799b32\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://k8s-jkns-gci-gce-sd-log/us-west1-b/bootstrap-e2e-master\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                            CPU Requests  CPU 
Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-bootstrap-e2e-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 etcd-server-events-bootstrap-e2e-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 kube-addon-manager-bootstrap-e2e-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         79m\n  kube-system                 kube-apiserver-bootstrap-e2e-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 kube-controller-manager-bootstrap-e2e-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         80m\n  kube-system                 kube-scheduler-bootstrap-e2e-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         80m\n  kube-system                 l7-lb-controller-bootstrap-e2e-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         80m\n  kube-system                 metadata-proxy-v0.1-542fb                       32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      80m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory                     145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
Oct 23 11:07:47.170: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6851 describe namespace kubectl-6851'
Oct 23 11:07:47.557: INFO: stderr: ""
Oct 23 11:07:47.557: INFO: stdout: "Name:         kubectl-6851\nLabels:       e2e-framework=kubectl\n              e2e-run=4769fc99-090d-4e1d-972f-598b49164a1a\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:47.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6851" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":306,"completed":181,"skipped":3136,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:07:47.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905" in namespace "projected-3584" to be "Succeeded or Failed"
Oct 23 11:07:48.020: INFO: Pod "downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905": Phase="Pending", Reason="", readiness=false. Elapsed: 50.236041ms
Oct 23 11:07:50.057: INFO: Pod "downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.087920229s
STEP: Saw pod success
Oct 23 11:07:50.058: INFO: Pod "downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905" satisfied condition "Succeeded or Failed"
Oct 23 11:07:50.096: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905 container client-container: <nil>
STEP: delete the pod
Oct 23 11:07:50.186: INFO: Waiting for pod downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905 to disappear
Oct 23 11:07:50.251: INFO: Pod downwardapi-volume-053a8804-ef6d-4681-b7e7-375613988905 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:07:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3584" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":182,"skipped":3138,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:08:13.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2718" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":306,"completed":183,"skipped":3150,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Oct 23 11:08:19.661: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:08:19.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6521" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":306,"completed":184,"skipped":3167,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 23 11:08:19.783: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override arguments
Oct 23 11:08:20.018: INFO: Waiting up to 5m0s for pod "client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50" in namespace "containers-9021" to be "Succeeded or Failed"
Oct 23 11:08:20.114: INFO: Pod "client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50": Phase="Pending", Reason="", readiness=false. Elapsed: 95.689443ms
Oct 23 11:08:22.153: INFO: Pod "client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.134335804s
STEP: Saw pod success
Oct 23 11:08:22.153: INFO: Pod "client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50" satisfied condition "Succeeded or Failed"
Oct 23 11:08:22.191: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50 container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:08:22.282: INFO: Waiting for pod client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50 to disappear
Oct 23 11:08:22.325: INFO: Pod client-containers-90ab2ffe-ed06-4acc-9c1f-4a89340f6f50 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:08:22.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9021" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":306,"completed":185,"skipped":3175,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:08:46.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9996" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":306,"completed":186,"skipped":3176,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:08:47.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2524" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":187,"skipped":3182,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:10:03.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4165" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":306,"completed":188,"skipped":3188,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:10:03.521: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 23 11:10:03.757: INFO: Waiting up to 5m0s for pod "pod-bc755909-1e56-46e8-b7c6-8542ef0d1714" in namespace "emptydir-4810" to be "Succeeded or Failed"
Oct 23 11:10:03.795: INFO: Pod "pod-bc755909-1e56-46e8-b7c6-8542ef0d1714": Phase="Pending", Reason="", readiness=false. Elapsed: 37.96764ms
Oct 23 11:10:05.846: INFO: Pod "pod-bc755909-1e56-46e8-b7c6-8542ef0d1714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.089370707s
STEP: Saw pod success
Oct 23 11:10:05.846: INFO: Pod "pod-bc755909-1e56-46e8-b7c6-8542ef0d1714" satisfied condition "Succeeded or Failed"
Oct 23 11:10:05.899: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-bc755909-1e56-46e8-b7c6-8542ef0d1714 container test-container: <nil>
STEP: delete the pod
Oct 23 11:10:06.161: INFO: Waiting for pod pod-bc755909-1e56-46e8-b7c6-8542ef0d1714 to disappear
Oct 23 11:10:06.211: INFO: Pod pod-bc755909-1e56-46e8-b7c6-8542ef0d1714 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:10:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4810" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":189,"skipped":3190,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:10:16.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-548" for this suite.
STEP: Destroying namespace "webhook-548-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":306,"completed":190,"skipped":3217,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Discovery
... skipping 96 lines ...
Oct 23 11:10:18.795: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Oct 23 11:10:18.795: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:10:18.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-8746" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":306,"completed":191,"skipped":3231,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:10:19.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1" in namespace "projected-7013" to be "Succeeded or Failed"
Oct 23 11:10:19.179: INFO: Pod "downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1": Phase="Pending", Reason="", readiness=false. Elapsed: 61.098965ms
Oct 23 11:10:21.216: INFO: Pod "downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.098825521s
STEP: Saw pod success
Oct 23 11:10:21.216: INFO: Pod "downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1" satisfied condition "Succeeded or Failed"
Oct 23 11:10:21.256: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1 container client-container: <nil>
STEP: delete the pod
Oct 23 11:10:21.361: INFO: Waiting for pod downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1 to disappear
Oct 23 11:10:21.401: INFO: Pod downwardapi-volume-a6b6ec3b-e494-44c8-88d8-c5626b6c2de1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:10:21.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7013" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":306,"completed":192,"skipped":3250,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Oct 23 11:11:12.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3356  010ad5d1-1b98-4b3c-bdb6-ec137aae722b 18733 0 2020-10-23 11:11:02 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-23 11:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 23 11:11:12.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3356  010ad5d1-1b98-4b3c-bdb6-ec137aae722b 18733 0 2020-10-23 11:11:02 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-23 11:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:11:22.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3356" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":306,"completed":193,"skipped":3273,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Oct 23 11:11:28.296: INFO: stdout: "service/rm3 exposed\n"
Oct 23 11:11:28.335: INFO: Service rm3 in namespace kubectl-6542 found.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:11:30.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6542" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":306,"completed":194,"skipped":3277,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 11:11:30.496: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:11:32.860: INFO: Deleting pod "var-expansion-4116ccb0-1cae-4986-a60a-1ab62ad68bbe" in namespace "var-expansion-2245"
Oct 23 11:11:32.917: INFO: Wait up to 5m0s for pod "var-expansion-4116ccb0-1cae-4986-a60a-1ab62ad68bbe" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:11:53.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2245" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":306,"completed":195,"skipped":3302,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:11:53.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7687" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":306,"completed":196,"skipped":3321,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-xjjx
STEP: Creating a pod to test atomic-volume-subpath
Oct 23 11:11:54.145: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xjjx" in namespace "subpath-8193" to be "Succeeded or Failed"
Oct 23 11:11:54.182: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Pending", Reason="", readiness=false. Elapsed: 37.41581ms
Oct 23 11:11:56.225: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 2.080446572s
Oct 23 11:11:58.288: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 4.142931667s
Oct 23 11:12:00.331: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 6.1859664s
Oct 23 11:12:02.370: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.225274526s
Oct 23 11:12:04.437: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.292420715s
Oct 23 11:12:06.475: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 12.330158902s
Oct 23 11:12:08.513: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.367955976s
Oct 23 11:12:10.577: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.432617696s
Oct 23 11:12:12.615: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.470478939s
Oct 23 11:12:14.659: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.514641636s
Oct 23 11:12:16.730: INFO: Pod "pod-subpath-test-configmap-xjjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.585580049s
STEP: Saw pod success
Oct 23 11:12:16.730: INFO: Pod "pod-subpath-test-configmap-xjjx" satisfied condition "Succeeded or Failed"
Oct 23 11:12:16.783: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-subpath-test-configmap-xjjx container test-container-subpath-configmap-xjjx: <nil>
STEP: delete the pod
Oct 23 11:12:17.278: INFO: Waiting for pod pod-subpath-test-configmap-xjjx to disappear
Oct 23 11:12:17.343: INFO: Pod pod-subpath-test-configmap-xjjx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xjjx
Oct 23 11:12:17.343: INFO: Deleting pod "pod-subpath-test-configmap-xjjx" in namespace "subpath-8193"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:12:17.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8193" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":306,"completed":197,"skipped":3336,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:12:20.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2078" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":306,"completed":198,"skipped":3347,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:12:30.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-974" for this suite.
STEP: Destroying namespace "webhook-974-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":306,"completed":199,"skipped":3370,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-d08f0afd-e447-43b3-b83b-ff31258b0adf
STEP: Creating a pod to test consume configMaps
Oct 23 11:12:31.251: INFO: Waiting up to 5m0s for pod "pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119" in namespace "configmap-1850" to be "Succeeded or Failed"
Oct 23 11:12:31.296: INFO: Pod "pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119": Phase="Pending", Reason="", readiness=false. Elapsed: 44.721995ms
Oct 23 11:12:33.335: INFO: Pod "pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084196834s
STEP: Saw pod success
Oct 23 11:12:33.335: INFO: Pod "pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119" satisfied condition "Succeeded or Failed"
Oct 23 11:12:33.372: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:12:33.488: INFO: Waiting for pod pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119 to disappear
Oct 23 11:12:33.530: INFO: Pod pod-configmaps-de26bc41-6b97-4983-ae55-a90bd0562119 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:12:33.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1850" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":200,"skipped":3371,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:12:36.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6312" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":306,"completed":201,"skipped":3419,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-wnrl
STEP: Creating a pod to test atomic-volume-subpath
Oct 23 11:12:37.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wnrl" in namespace "subpath-71" to be "Succeeded or Failed"
Oct 23 11:12:37.521: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Pending", Reason="", readiness=false. Elapsed: 101.8644ms
Oct 23 11:12:39.558: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 2.139287936s
Oct 23 11:12:41.596: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 4.176997347s
Oct 23 11:12:43.709: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 6.290645841s
Oct 23 11:12:45.828: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 8.409310588s
Oct 23 11:12:47.887: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 10.467967343s
Oct 23 11:12:49.925: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 12.506044583s
Oct 23 11:12:51.964: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 14.54535608s
Oct 23 11:12:54.002: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 16.583533478s
Oct 23 11:12:56.050: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 18.630953985s
Oct 23 11:12:58.124: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Running", Reason="", readiness=true. Elapsed: 20.704908576s
Oct 23 11:13:00.167: INFO: Pod "pod-subpath-test-secret-wnrl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.748503132s
STEP: Saw pod success
Oct 23 11:13:00.167: INFO: Pod "pod-subpath-test-secret-wnrl" satisfied condition "Succeeded or Failed"
Oct 23 11:13:00.205: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-subpath-test-secret-wnrl container test-container-subpath-secret-wnrl: <nil>
STEP: delete the pod
Oct 23 11:13:00.293: INFO: Waiting for pod pod-subpath-test-secret-wnrl to disappear
Oct 23 11:13:00.331: INFO: Pod pod-subpath-test-secret-wnrl no longer exists
STEP: Deleting pod pod-subpath-test-secret-wnrl
Oct 23 11:13:00.331: INFO: Deleting pod "pod-subpath-test-secret-wnrl" in namespace "subpath-71"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:00.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-71" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":306,"completed":202,"skipped":3451,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:13:00.448: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 23 11:13:00.679: INFO: Waiting up to 5m0s for pod "pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2" in namespace "emptydir-6695" to be "Succeeded or Failed"
Oct 23 11:13:00.720: INFO: Pod "pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.522411ms
Oct 23 11:13:02.763: INFO: Pod "pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.083551741s
STEP: Saw pod success
Oct 23 11:13:02.763: INFO: Pod "pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2" satisfied condition "Succeeded or Failed"
Oct 23 11:13:02.801: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2 container test-container: <nil>
STEP: delete the pod
Oct 23 11:13:02.923: INFO: Waiting for pod pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2 to disappear
Oct 23 11:13:02.962: INFO: Pod pod-0a5b0a65-c143-4dc3-b0fe-4d86d67cabb2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:02.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6695" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":203,"skipped":3456,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:13:03.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888" in namespace "downward-api-5549" to be "Succeeded or Failed"
Oct 23 11:13:03.415: INFO: Pod "downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888": Phase="Pending", Reason="", readiness=false. Elapsed: 96.295541ms
Oct 23 11:13:05.469: INFO: Pod "downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.150626316s
STEP: Saw pod success
Oct 23 11:13:05.469: INFO: Pod "downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888" satisfied condition "Succeeded or Failed"
Oct 23 11:13:05.546: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888 container client-container: <nil>
STEP: delete the pod
Oct 23 11:13:05.702: INFO: Waiting for pod downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888 to disappear
Oct 23 11:13:05.743: INFO: Pod downwardapi-volume-988165b7-65bf-4c51-a303-5eb21f999888 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:05.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5549" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":204,"skipped":3481,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:12.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2913" for this suite.
STEP: Destroying namespace "webhook-2913-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":306,"completed":205,"skipped":3489,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Oct 23 11:13:32.931: INFO: stderr: ""
Oct 23 11:13:32.931: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:32.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9385" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":306,"completed":206,"skipped":3513,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:13:33.011: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 23 11:13:33.242: INFO: Waiting up to 5m0s for pod "pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2" in namespace "emptydir-5516" to be "Succeeded or Failed"
Oct 23 11:13:33.279: INFO: Pod "pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.365477ms
Oct 23 11:13:35.318: INFO: Pod "pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075806385s
STEP: Saw pod success
Oct 23 11:13:35.318: INFO: Pod "pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2" satisfied condition "Succeeded or Failed"
Oct 23 11:13:35.355: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2 container test-container: <nil>
STEP: delete the pod
Oct 23 11:13:35.454: INFO: Waiting for pod pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2 to disappear
Oct 23 11:13:35.492: INFO: Pod pod-2b4e32b8-ed2e-4ed3-ab54-a0971c8dd7d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:35.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5516" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":207,"skipped":3526,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 23 11:13:52.329: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 23 11:13:52.388: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:13:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7999" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":306,"completed":208,"skipped":3532,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] version v1
... skipping 336 lines ...
Oct 23 11:13:57.742: INFO: Deleting ReplicationController proxy-service-29sjt took: 69.778895ms
Oct 23 11:13:58.442: INFO: Terminating ReplicationController proxy-service-29sjt pods took: 700.232633ms
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:14:11.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3098" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":306,"completed":209,"skipped":3587,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-96aabea4-9b74-473d-acc3-68abf99e8124
STEP: Creating a pod to test consume secrets
Oct 23 11:14:11.894: INFO: Waiting up to 5m0s for pod "pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6" in namespace "secrets-8222" to be "Succeeded or Failed"
Oct 23 11:14:11.931: INFO: Pod "pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.036356ms
Oct 23 11:14:13.970: INFO: Pod "pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075864975s
STEP: Saw pod success
Oct 23 11:14:13.970: INFO: Pod "pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6" satisfied condition "Succeeded or Failed"
Oct 23 11:14:14.008: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6 container secret-env-test: <nil>
STEP: delete the pod
Oct 23 11:14:14.200: INFO: Waiting for pod pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6 to disappear
Oct 23 11:14:14.239: INFO: Pod pod-secrets-a9fdacdc-4553-4961-8f27-2e5e8c70e6d6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:14:14.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8222" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":306,"completed":210,"skipped":3590,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-fc5969b3-8b5f-4b8b-84be-65ffa9b2f663
STEP: Creating a pod to test consume secrets
Oct 23 11:14:14.605: INFO: Waiting up to 5m0s for pod "pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd" in namespace "secrets-4414" to be "Succeeded or Failed"
Oct 23 11:14:14.645: INFO: Pod "pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.114206ms
Oct 23 11:14:16.684: INFO: Pod "pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078893305s
STEP: Saw pod success
Oct 23 11:14:16.684: INFO: Pod "pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd" satisfied condition "Succeeded or Failed"
Oct 23 11:14:16.723: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:14:16.854: INFO: Waiting for pod pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd to disappear
Oct 23 11:14:16.894: INFO: Pod pod-secrets-3a4195a0-fd48-4d5b-9dbd-6c08afe4badd no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:14:16.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4414" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":211,"skipped":3596,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 57 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:01.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4429" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":212,"skipped":3606,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:13.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8561" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":306,"completed":213,"skipped":3630,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-e70e5bd2-19f8-44fe-9cad-3dca1270de83
STEP: Creating a pod to test consume configMaps
Oct 23 11:15:13.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02" in namespace "projected-2892" to be "Succeeded or Failed"
Oct 23 11:15:13.684: INFO: Pod "pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02": Phase="Pending", Reason="", readiness=false. Elapsed: 52.268062ms
Oct 23 11:15:15.740: INFO: Pod "pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10847547s
STEP: Saw pod success
Oct 23 11:15:15.740: INFO: Pod "pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02" satisfied condition "Succeeded or Failed"
Oct 23 11:15:15.795: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:15:16.077: INFO: Waiting for pod pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02 to disappear
Oct 23 11:15:16.119: INFO: Pod pod-projected-configmaps-84a63a8f-3cb8-4dba-a67a-f892d3431a02 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:16.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2892" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":214,"skipped":3654,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 11:15:16.364: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Oct 23 11:15:17.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 11:15:19.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739048517, loc:(*time.Location)(0x774f580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 23 11:15:22.796: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:23.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8787" for this suite.
STEP: Destroying namespace "webhook-8787-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":306,"completed":215,"skipped":3659,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 21 lines ...
Oct 23 11:15:47.854: INFO: The status of Pod test-webserver-1fc11c9c-f54d-4f96-ab8d-ba06464638c2 is Running (Ready = true)
Oct 23 11:15:47.929: INFO: Container started at 2020-10-23 11:15:24 +0000 UTC, pod became ready at 2020-10-23 11:15:47 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:47.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9659" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":306,"completed":216,"skipped":3663,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Oct 23 11:15:48.700: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:15:48.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2104" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":306,"completed":217,"skipped":3673,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Oct 23 11:15:57.331: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-638 explain e2e-test-crd-publish-openapi-4983-crds.spec'
Oct 23 11:15:57.698: INFO: stderr: ""
Oct 23 11:15:57.698: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4983-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct 23 11:15:57.699: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-638 explain e2e-test-crd-publish-openapi-4983-crds.spec.bars'
Oct 23 11:15:58.049: INFO: stderr: ""
Oct 23 11:15:58.049: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4983-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 23 11:15:58.049: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-638 explain e2e-test-crd-publish-openapi-4983-crds.spec.bars2'
Oct 23 11:15:58.418: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:16:02.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-638" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":306,"completed":218,"skipped":3682,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:16:07.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5715" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":306,"completed":219,"skipped":3706,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Oct 23 11:16:38.574: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 11:16:38.635: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:16:38.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6115" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":306,"completed":220,"skipped":3724,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:16:39.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f" in namespace "downward-api-7230" to be "Succeeded or Failed"
Oct 23 11:16:39.550: INFO: Pod "downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f": Phase="Pending", Reason="", readiness=false. Elapsed: 83.095893ms
Oct 23 11:16:41.589: INFO: Pod "downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.122609s
STEP: Saw pod success
Oct 23 11:16:41.589: INFO: Pod "downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f" satisfied condition "Succeeded or Failed"
Oct 23 11:16:41.627: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f container client-container: <nil>
STEP: delete the pod
Oct 23 11:16:42.089: INFO: Waiting for pod downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f to disappear
Oct 23 11:16:42.251: INFO: Pod downwardapi-volume-8ea1d844-8410-4ef7-b8f4-09d08d33879f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:16:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7230" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":221,"skipped":3789,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 23 11:17:33.805: INFO: Restart count of pod container-probe-803/busybox-5b4c3b27-81d7-4756-9eb1-9b708b7be7e5 is now 1 (49.129933645s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:17:33.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-803" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":222,"skipped":3856,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 23 11:17:33.936: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 23 11:17:34.173: INFO: Waiting up to 5m0s for pod "downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad" in namespace "downward-api-1537" to be "Succeeded or Failed"
Oct 23 11:17:34.215: INFO: Pod "downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad": Phase="Pending", Reason="", readiness=false. Elapsed: 41.820995ms
Oct 23 11:17:36.257: INFO: Pod "downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084191444s
STEP: Saw pod success
Oct 23 11:17:36.257: INFO: Pod "downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad" satisfied condition "Succeeded or Failed"
Oct 23 11:17:36.295: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad container dapi-container: <nil>
STEP: delete the pod
Oct 23 11:17:36.504: INFO: Waiting for pod downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad to disappear
Oct 23 11:17:36.542: INFO: Pod downward-api-4205ba30-963d-4f85-9d3c-97fc592442ad no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:17:36.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1537" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":306,"completed":223,"skipped":3870,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:17:36.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc" in namespace "downward-api-7573" to be "Succeeded or Failed"
Oct 23 11:17:36.947: INFO: Pod "downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 37.68516ms
Oct 23 11:17:39.014: INFO: Pod "downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105023487s
STEP: Saw pod success
Oct 23 11:17:39.014: INFO: Pod "downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc" satisfied condition "Succeeded or Failed"
Oct 23 11:17:39.135: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc container client-container: <nil>
STEP: delete the pod
Oct 23 11:17:39.231: INFO: Waiting for pod downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc to disappear
Oct 23 11:17:39.270: INFO: Pod downwardapi-volume-27639644-0d62-4622-9625-ffca9c63d5bc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:17:39.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7573" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":224,"skipped":3872,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:17:39.361: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 23 11:17:39.595: INFO: Waiting up to 5m0s for pod "pod-ff016515-6678-420e-a4e8-85fc7178917e" in namespace "emptydir-6422" to be "Succeeded or Failed"
Oct 23 11:17:39.632: INFO: Pod "pod-ff016515-6678-420e-a4e8-85fc7178917e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.834719ms
Oct 23 11:17:41.719: INFO: Pod "pod-ff016515-6678-420e-a4e8-85fc7178917e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.124456794s
STEP: Saw pod success
Oct 23 11:17:41.719: INFO: Pod "pod-ff016515-6678-420e-a4e8-85fc7178917e" satisfied condition "Succeeded or Failed"
Oct 23 11:17:41.780: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-ff016515-6678-420e-a4e8-85fc7178917e container test-container: <nil>
STEP: delete the pod
Oct 23 11:17:41.945: INFO: Waiting for pod pod-ff016515-6678-420e-a4e8-85fc7178917e to disappear
Oct 23 11:17:42.023: INFO: Pod pod-ff016515-6678-420e-a4e8-85fc7178917e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:17:42.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6422" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":225,"skipped":3875,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Oct 23 11:17:46.938: INFO: stderr: ""
Oct 23 11:17:46.938: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:17:46.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3376" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":306,"completed":226,"skipped":3876,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:04.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2196" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":306,"completed":227,"skipped":3893,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Oct 23 11:18:09.670: INFO: Pod "test-cleanup-deployment-685c4f8568-rpnrb" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-rpnrb test-cleanup-deployment-685c4f8568- deployment-6243  8d9859db-9b3e-4c72-bbf2-b323beb989ba 20427 0 2020-10-23 11:18:07 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 fb2ed552-4345-447b-b177-6637a0f949bb 0xc0036cfd47 0xc0036cfd48}] []  [{kube-controller-manager Update v1 2020-10-23 11:18:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb2ed552-4345-447b-b177-6637a0f949bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-23 11:18:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvz5h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvz5h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvz5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-0324,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.1.193,StartTime:2020-10-23 11:18:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-23 11:18:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://21557538625aea3566116e43bca04523b697d8feeb52fc48477850a7555e5798,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:09.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6243" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":306,"completed":228,"skipped":3894,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:18:09.763: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 23 11:18:10.068: INFO: Waiting up to 5m0s for pod "pod-b8de5200-7719-42c2-95e2-287419949f6a" in namespace "emptydir-834" to be "Succeeded or Failed"
Oct 23 11:18:10.121: INFO: Pod "pod-b8de5200-7719-42c2-95e2-287419949f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.19046ms
Oct 23 11:18:12.177: INFO: Pod "pod-b8de5200-7719-42c2-95e2-287419949f6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109194702s
STEP: Saw pod success
Oct 23 11:18:12.177: INFO: Pod "pod-b8de5200-7719-42c2-95e2-287419949f6a" satisfied condition "Succeeded or Failed"
Oct 23 11:18:12.221: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-b8de5200-7719-42c2-95e2-287419949f6a container test-container: <nil>
STEP: delete the pod
Oct 23 11:18:12.390: INFO: Waiting for pod pod-b8de5200-7719-42c2-95e2-287419949f6a to disappear
Oct 23 11:18:12.476: INFO: Pod pod-b8de5200-7719-42c2-95e2-287419949f6a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-834" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":229,"skipped":3908,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:18:13.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765" in namespace "downward-api-3636" to be "Succeeded or Failed"
Oct 23 11:18:13.143: INFO: Pod "downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765": Phase="Pending", Reason="", readiness=false. Elapsed: 69.458228ms
Oct 23 11:18:15.181: INFO: Pod "downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.107619392s
STEP: Saw pod success
Oct 23 11:18:15.181: INFO: Pod "downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765" satisfied condition "Succeeded or Failed"
Oct 23 11:18:15.221: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765 container client-container: <nil>
STEP: delete the pod
Oct 23 11:18:15.310: INFO: Waiting for pod downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765 to disappear
Oct 23 11:18:15.347: INFO: Pod downwardapi-volume-46a75d7d-0857-4e65-b9cc-5a5033736765 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:15.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3636" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":230,"skipped":3943,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 23 11:18:32.224: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:32.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2768" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":306,"completed":231,"skipped":3950,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 11:18:32.344: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:38.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9846" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":306,"completed":232,"skipped":3950,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-dcf1e926-5064-4dd4-8b35-151abf8e303c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:43.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7644" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":233,"skipped":3965,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Oct 23 11:18:48.924: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-2vq7f" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-2vq7f test-rolling-update-deployment-6b6bf9df46- deployment-4593  7b2ee462-f9f9-4b19-9dd0-8a1074ced7d7 20672 0 2020-10-23 11:18:46 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 035838d4-59fc-49a6-9114-541dcef4a757 0xc0053d1e57 0xc0053d1e58}] []  [{kube-controller-manager Update v1 2020-10-23 11:18:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"035838d4-59fc-49a6-9114-541dcef4a757\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-23 11:18:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jn6d5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jn6d5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jn6d5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-xbjm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:18:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.2.92,StartTime:2020-10-23 11:18:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-23 11:18:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://fcafe552cae1cdaae2b0482d7f16de4161b3c416be425d7c0ce6cd1914bfe937,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:48.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4593" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":234,"skipped":4010,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 13 lines ...
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:50.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1817" for this suite.
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":306,"completed":235,"skipped":4029,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:18:52.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-3883" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":306,"completed":236,"skipped":4050,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:19:55.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2823" for this suite.
STEP: Destroying namespace "nsdeletetest-5744" for this suite.
Oct 23 11:19:56.073: INFO: Namespace nsdeletetest-5744 was already deleted
STEP: Destroying namespace "nsdeletetest-8473" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":306,"completed":237,"skipped":4051,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:19:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2983" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":306,"completed":238,"skipped":4054,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 23 11:19:58.935: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:19:59.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6810" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":239,"skipped":4055,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:20:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-814" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":240,"skipped":4060,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 11:20:01.783: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 23 11:20:01.976: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:20:07.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5240" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":306,"completed":241,"skipped":4061,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Oct 23 11:20:59.502: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-23T11:20:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-23T11:20:39Z]] name:name2 resourceVersion:21132 uid:1bc65911-711d-45ef-b5be-d6e22c9f0c34] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:09.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3245" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":306,"completed":242,"skipped":4083,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-d492f1f4-b449-464a-9206-555eb9cf9450
STEP: Creating a pod to test consume configMaps
Oct 23 11:21:10.999: INFO: Waiting up to 5m0s for pod "pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1" in namespace "configmap-9724" to be "Succeeded or Failed"
Oct 23 11:21:11.103: INFO: Pod "pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 104.386599ms
Oct 23 11:21:13.146: INFO: Pod "pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.146894097s
STEP: Saw pod success
Oct 23 11:21:13.146: INFO: Pod "pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1" satisfied condition "Succeeded or Failed"
Oct 23 11:21:13.185: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:21:13.294: INFO: Waiting for pod pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1 to disappear
Oct 23 11:21:13.336: INFO: Pod pod-configmaps-3248db56-db61-4f51-80b5-dfd17e2ba6b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:13.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9724" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":243,"skipped":4088,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 35 lines ...
Oct 23 11:21:37.272: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 11:21:38.578: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:38.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1629" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":244,"skipped":4096,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-8304" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":306,"completed":245,"skipped":4111,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 23 11:21:39.579: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 23 11:21:39.806: INFO: Waiting up to 5m0s for pod "downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd" in namespace "downward-api-8797" to be "Succeeded or Failed"
Oct 23 11:21:39.843: INFO: Pod "downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.858563ms
Oct 23 11:21:41.901: INFO: Pod "downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094885018s
STEP: Saw pod success
Oct 23 11:21:41.901: INFO: Pod "downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd" satisfied condition "Succeeded or Failed"
Oct 23 11:21:41.954: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd container dapi-container: <nil>
STEP: delete the pod
Oct 23 11:21:42.089: INFO: Waiting for pod downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd to disappear
Oct 23 11:21:42.163: INFO: Pod downward-api-04517214-9cb9-43ab-8dcd-ab830042c8dd no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:42.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8797" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":306,"completed":246,"skipped":4121,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-7cb18af1-b0fb-4263-8ef5-4bbb27d1162b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:21:49.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8775" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":247,"skipped":4142,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 39 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:00.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8230" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":306,"completed":248,"skipped":4155,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Oct 23 11:22:01.065: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:02.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5685" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":306,"completed":249,"skipped":4170,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 23 11:22:02.725: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override command
Oct 23 11:22:03.386: INFO: Waiting up to 5m0s for pod "client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986" in namespace "containers-4947" to be "Succeeded or Failed"
Oct 23 11:22:03.552: INFO: Pod "client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986": Phase="Pending", Reason="", readiness=false. Elapsed: 165.751241ms
Oct 23 11:22:05.590: INFO: Pod "client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203535936s
STEP: Saw pod success
Oct 23 11:22:05.590: INFO: Pod "client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986" satisfied condition "Succeeded or Failed"
Oct 23 11:22:05.627: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986 container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:22:05.715: INFO: Waiting for pod client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986 to disappear
Oct 23 11:22:05.754: INFO: Pod client-containers-39adf6f2-3fce-4c51-9d88-f7fb15ae5986 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:05.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4947" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":306,"completed":250,"skipped":4179,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:22:06.075: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1a73b00b-12f2-4c97-8d72-17d12750c8a5" in namespace "security-context-test-8888" to be "Succeeded or Failed"
Oct 23 11:22:06.123: INFO: Pod "busybox-user-65534-1a73b00b-12f2-4c97-8d72-17d12750c8a5": Phase="Pending", Reason="", readiness=false. Elapsed: 47.219424ms
Oct 23 11:22:08.162: INFO: Pod "busybox-user-65534-1a73b00b-12f2-4c97-8d72-17d12750c8a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.086687356s
Oct 23 11:22:08.162: INFO: Pod "busybox-user-65534-1a73b00b-12f2-4c97-8d72-17d12750c8a5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:08.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8888" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":251,"skipped":4195,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:08.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9715" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":306,"completed":252,"skipped":4205,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Oct 23 11:22:09.336: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:09.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9766" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":306,"completed":253,"skipped":4222,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 23 11:22:09.944: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test env composition
Oct 23 11:22:10.360: INFO: Waiting up to 5m0s for pod "var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23" in namespace "var-expansion-6579" to be "Succeeded or Failed"
Oct 23 11:22:10.457: INFO: Pod "var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23": Phase="Pending", Reason="", readiness=false. Elapsed: 97.064249ms
Oct 23 11:22:12.495: INFO: Pod "var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.135276164s
STEP: Saw pod success
Oct 23 11:22:12.495: INFO: Pod "var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23" satisfied condition "Succeeded or Failed"
Oct 23 11:22:12.533: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23 container dapi-container: <nil>
STEP: delete the pod
Oct 23 11:22:12.629: INFO: Waiting for pod var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23 to disappear
Oct 23 11:22:12.667: INFO: Pod var-expansion-5c19f787-9b8f-407a-aa04-e86a3f1cfc23 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:12.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6579" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":306,"completed":254,"skipped":4278,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-n9l6
STEP: Creating a pod to test atomic-volume-subpath
Oct 23 11:22:13.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n9l6" in namespace "subpath-1396" to be "Succeeded or Failed"
Oct 23 11:22:13.097: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.109969ms
Oct 23 11:22:15.137: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 2.076751683s
Oct 23 11:22:17.178: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 4.118244449s
Oct 23 11:22:19.216: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 6.156126141s
Oct 23 11:22:21.267: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 8.207384114s
Oct 23 11:22:23.321: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 10.261422511s
Oct 23 11:22:25.360: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 12.300158889s
Oct 23 11:22:27.398: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 14.338618837s
Oct 23 11:22:29.456: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 16.396471407s
Oct 23 11:22:31.495: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 18.434855208s
Oct 23 11:22:33.626: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Running", Reason="", readiness=true. Elapsed: 20.56658455s
Oct 23 11:22:35.696: INFO: Pod "pod-subpath-test-configmap-n9l6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.63644177s
STEP: Saw pod success
Oct 23 11:22:35.696: INFO: Pod "pod-subpath-test-configmap-n9l6" satisfied condition "Succeeded or Failed"
Oct 23 11:22:35.783: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-subpath-test-configmap-n9l6 container test-container-subpath-configmap-n9l6: <nil>
STEP: delete the pod
Oct 23 11:22:36.014: INFO: Waiting for pod pod-subpath-test-configmap-n9l6 to disappear
Oct 23 11:22:36.069: INFO: Pod pod-subpath-test-configmap-n9l6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-n9l6
Oct 23 11:22:36.069: INFO: Deleting pod "pod-subpath-test-configmap-n9l6" in namespace "subpath-1396"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:22:36.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1396" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":306,"completed":255,"skipped":4296,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:25:00.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1691" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":306,"completed":256,"skipped":4300,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Oct 23 11:25:17.781: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 11:25:21.846: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:25:40.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6913" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":306,"completed":257,"skipped":4302,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Oct 23 11:25:51.529: INFO: stderr: ""
Oct 23 11:25:51.529: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:25:51.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2007" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":306,"completed":258,"skipped":4317,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-3ca65a4c-1f9b-416e-95d8-ae942afa47e8
STEP: Creating a pod to test consume configMaps
Oct 23 11:25:51.892: INFO: Waiting up to 5m0s for pod "pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d" in namespace "configmap-8567" to be "Succeeded or Failed"
Oct 23 11:25:51.939: INFO: Pod "pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.332389ms
Oct 23 11:25:53.980: INFO: Pod "pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.088401061s
STEP: Saw pod success
Oct 23 11:25:53.980: INFO: Pod "pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d" satisfied condition "Succeeded or Failed"
Oct 23 11:25:54.020: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:25:54.319: INFO: Waiting for pod pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d to disappear
Oct 23 11:25:54.357: INFO: Pod pod-configmaps-656ab7b3-8cfb-4141-a310-5aa7b075930d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:25:54.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8567" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":259,"skipped":4321,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-d8237610-a535-4b75-a70c-71fe1d8d221a
STEP: Creating a pod to test consume secrets
Oct 23 11:25:54.804: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6" in namespace "projected-5482" to be "Succeeded or Failed"
Oct 23 11:25:54.841: INFO: Pod "pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.896023ms
Oct 23 11:25:56.896: INFO: Pod "pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.091467954s
STEP: Saw pod success
Oct 23 11:25:56.896: INFO: Pod "pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6" satisfied condition "Succeeded or Failed"
Oct 23 11:25:56.951: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:25:57.172: INFO: Waiting for pod pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6 to disappear
Oct 23 11:25:57.231: INFO: Pod pod-projected-secrets-58a6af4a-e022-48f4-b14e-df8a56a78bc6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:25:57.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5482" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":260,"skipped":4330,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-ae7c4c7e-929e-488f-a8c6-c50d87422d32
STEP: Creating a pod to test consume secrets
Oct 23 11:25:57.895: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520" in namespace "projected-6012" to be "Succeeded or Failed"
Oct 23 11:25:57.943: INFO: Pod "pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520": Phase="Pending", Reason="", readiness=false. Elapsed: 47.372142ms
Oct 23 11:25:59.987: INFO: Pod "pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.092304168s
STEP: Saw pod success
Oct 23 11:25:59.987: INFO: Pod "pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520" satisfied condition "Succeeded or Failed"
Oct 23 11:26:00.037: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:26:00.461: INFO: Waiting for pod pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520 to disappear
Oct 23 11:26:00.502: INFO: Pod pod-projected-secrets-c91e58ff-6c2a-4395-96e2-b6187cae5520 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:26:00.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6012" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":261,"skipped":4335,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:26:23.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-464" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":306,"completed":262,"skipped":4339,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 23 11:26:27.933: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.030: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.244: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.367: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.423: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.494: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:28.682: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:33.759: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:33.905: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.010: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.104: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.660: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.759: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:34.882: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:35.041: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:38.722: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:38.766: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:38.806: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:38.847: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:38.966: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:39.007: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:39.047: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:39.089: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:39.169: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:43.722: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:43.764: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:43.804: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:43.843: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:43.982: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:44.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:44.062: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:44.101: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:44.182: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:48.722: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:48.765: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:48.805: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:48.844: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:48.962: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:49.002: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:49.041: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:49.080: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:49.159: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:53.737: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:53.818: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:53.885: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:53.948: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:54.187: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:54.230: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:54.270: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:54.317: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local from pod dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861: the server could not find the requested resource (get pods dns-test-e019987f-acdf-46fa-9e9b-974731515861)
Oct 23 11:26:54.402: INFO: Lookups using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6589.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6589.svc.cluster.local jessie_udp@dns-test-service-2.dns-6589.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6589.svc.cluster.local]

Oct 23 11:26:59.156: INFO: DNS probes using dns-6589/dns-test-e019987f-acdf-46fa-9e9b-974731515861 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:26:59.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6589" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":306,"completed":263,"skipped":4349,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip in namespace services-3305
I1023 11:26:59.908945  144144 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3305, replica count: 3
I1023 11:27:03.009481  144144 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 11:27:03.085: INFO: Creating new exec pod
Oct 23 11:27:06.240: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-3305 exec execpod-affinity5mwqw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Oct 23 11:27:07.797: INFO: rc: 1
Oct 23 11:27:07.797: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-3305 exec execpod-affinity5mwqw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 11:27:08.797: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-3305 exec execpod-affinity5mwqw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Oct 23 11:27:10.355: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Oct 23 11:27:10.355: INFO: stdout: ""
Oct 23 11:27:10.356: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-3305 exec execpod-affinity5mwqw -- /bin/sh -x -c nc -zv -t -w 2 10.0.213.178 80'
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:27:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3305" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":264,"skipped":4351,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2869
STEP: Creating statefulset with conflicting port in namespace statefulset-2869
STEP: Waiting until pod test-pod will start running in namespace statefulset-2869
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2869
Oct 23 11:27:28.904: INFO: Observed stateful pod in namespace: statefulset-2869, name: ss-0, uid: 524da38d-26c5-40b4-9903-cf665a99a9e5, status phase: Pending. Waiting for statefulset controller to delete.
Oct 23 11:27:29.038: INFO: Observed stateful pod in namespace: statefulset-2869, name: ss-0, uid: 524da38d-26c5-40b4-9903-cf665a99a9e5, status phase: Failed. Waiting for statefulset controller to delete.
Oct 23 11:27:29.046: INFO: Observed stateful pod in namespace: statefulset-2869, name: ss-0, uid: 524da38d-26c5-40b4-9903-cf665a99a9e5, status phase: Failed. Waiting for statefulset controller to delete.
Oct 23 11:27:29.053: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2869
STEP: Removing pod with conflicting port in namespace statefulset-2869
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2869 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 23 11:27:31.205: INFO: Deleting all statefulset in ns statefulset-2869
Oct 23 11:27:31.243: INFO: Scaling statefulset ss to 0
Oct 23 11:27:51.410: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 11:27:51.514: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:27:51.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2869" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":306,"completed":265,"skipped":4360,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:27:59.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8523" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":306,"completed":266,"skipped":4369,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:28:31.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5928" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":306,"completed":267,"skipped":4372,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-86e3d5d4-ffcb-402a-a66a-541aaf0d1893
STEP: Creating a pod to test consume secrets
Oct 23 11:28:32.531: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016" in namespace "projected-5187" to be "Succeeded or Failed"
Oct 23 11:28:32.592: INFO: Pod "pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016": Phase="Pending", Reason="", readiness=false. Elapsed: 60.670783ms
Oct 23 11:28:34.630: INFO: Pod "pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.098547304s
STEP: Saw pod success
Oct 23 11:28:34.630: INFO: Pod "pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016" satisfied condition "Succeeded or Failed"
Oct 23 11:28:34.669: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:28:34.801: INFO: Waiting for pod pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016 to disappear
Oct 23 11:28:34.841: INFO: Pod pod-projected-secrets-9568a2c8-910b-41ff-94cf-db97962b1016 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:28:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5187" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":268,"skipped":4374,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-fd4e9ac4-39bb-4681-80d2-b55e659b0f7f
STEP: Creating a pod to test consume configMaps
Oct 23 11:28:35.200: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d" in namespace "projected-2713" to be "Succeeded or Failed"
Oct 23 11:28:35.492: INFO: Pod "pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d": Phase="Pending", Reason="", readiness=false. Elapsed: 292.521835ms
Oct 23 11:28:37.530: INFO: Pod "pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.330110754s
STEP: Saw pod success
Oct 23 11:28:37.530: INFO: Pod "pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d" satisfied condition "Succeeded or Failed"
Oct 23 11:28:37.567: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:28:37.656: INFO: Waiting for pod pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d to disappear
Oct 23 11:28:37.695: INFO: Pod pod-projected-configmaps-d71e6df1-a87d-4ae5-814d-2a6cbedbc58d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:28:37.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2713" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":269,"skipped":4374,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:28:37.776: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 23 11:28:38.071: INFO: Waiting up to 5m0s for pod "pod-336b0e8e-80fc-4f48-b42b-b6448e843601" in namespace "emptydir-5934" to be "Succeeded or Failed"
Oct 23 11:28:38.141: INFO: Pod "pod-336b0e8e-80fc-4f48-b42b-b6448e843601": Phase="Pending", Reason="", readiness=false. Elapsed: 70.036367ms
Oct 23 11:28:40.243: INFO: Pod "pod-336b0e8e-80fc-4f48-b42b-b6448e843601": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.172235469s
STEP: Saw pod success
Oct 23 11:28:40.243: INFO: Pod "pod-336b0e8e-80fc-4f48-b42b-b6448e843601" satisfied condition "Succeeded or Failed"
Oct 23 11:28:40.281: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-336b0e8e-80fc-4f48-b42b-b6448e843601 container test-container: <nil>
STEP: delete the pod
Oct 23 11:28:40.378: INFO: Waiting for pod pod-336b0e8e-80fc-4f48-b42b-b6448e843601 to disappear
Oct 23 11:28:40.416: INFO: Pod pod-336b0e8e-80fc-4f48-b42b-b6448e843601 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:28:40.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5934" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":270,"skipped":4374,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:28:53.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9063" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":306,"completed":271,"skipped":4390,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 23 11:28:53.266: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:28:55.575: INFO: Deleting pod "var-expansion-bb358f81-0243-4895-afd9-d77607a650bc" in namespace "var-expansion-5842"
Oct 23 11:28:55.619: INFO: Wait up to 5m0s for pod "var-expansion-bb358f81-0243-4895-afd9-d77607a650bc" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:29:51.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5842" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":306,"completed":272,"skipped":4390,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-3c2e995b-401a-4b82-b994-f83f233d0b90
STEP: Creating a pod to test consume configMaps
Oct 23 11:29:52.048: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea" in namespace "projected-2571" to be "Succeeded or Failed"
Oct 23 11:29:52.085: INFO: Pod "pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea": Phase="Pending", Reason="", readiness=false. Elapsed: 37.386453ms
Oct 23 11:29:54.124: INFO: Pod "pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076599457s
STEP: Saw pod success
Oct 23 11:29:54.124: INFO: Pod "pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea" satisfied condition "Succeeded or Failed"
Oct 23 11:29:54.263: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:29:54.352: INFO: Waiting for pod pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea to disappear
Oct 23 11:29:54.390: INFO: Pod pod-projected-configmaps-97d22f21-1f1e-4a7c-9abf-64c3e196abea no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:29:54.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2571" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":273,"skipped":4411,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 18 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:30:16.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1952" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":306,"completed":274,"skipped":4412,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:30:17.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29" in namespace "projected-3535" to be "Succeeded or Failed"
Oct 23 11:30:17.213: INFO: Pod "downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29": Phase="Pending", Reason="", readiness=false. Elapsed: 175.472254ms
Oct 23 11:30:19.250: INFO: Pod "downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21256791s
Oct 23 11:30:21.288: INFO: Pod "downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.24977457s
STEP: Saw pod success
Oct 23 11:30:21.288: INFO: Pod "downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29" satisfied condition "Succeeded or Failed"
Oct 23 11:30:21.324: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29 container client-container: <nil>
STEP: delete the pod
Oct 23 11:30:21.411: INFO: Waiting for pod downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29 to disappear
Oct 23 11:30:21.447: INFO: Pod downwardapi-volume-cac45770-7771-4642-a2d6-5feffe177c29 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:30:21.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3535" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":275,"skipped":4415,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:30:21.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2214" for this suite.
STEP: Destroying namespace "nspatchtest-7c48a801-2e5b-44a4-b66c-1a80102fb078-5589" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":306,"completed":276,"skipped":4421,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 23 11:30:22.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7" in namespace "downward-api-5434" to be "Succeeded or Failed"
Oct 23 11:30:22.333: INFO: Pod "downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7": Phase="Pending", Reason="", readiness=false. Elapsed: 67.52963ms
Oct 23 11:30:24.370: INFO: Pod "downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104497733s
STEP: Saw pod success
Oct 23 11:30:24.370: INFO: Pod "downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7" satisfied condition "Succeeded or Failed"
Oct 23 11:30:24.408: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7 container client-container: <nil>
STEP: delete the pod
Oct 23 11:30:24.790: INFO: Waiting for pod downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7 to disappear
Oct 23 11:30:24.826: INFO: Pod downwardapi-volume-fba49626-72f3-414b-9b23-fd74d62e91a7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:30:24.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5434" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":277,"skipped":4443,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Oct 23 11:30:27.446: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:30:27.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9406" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":306,"completed":278,"skipped":4446,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-nodeport-transition in namespace services-8539
I1023 11:30:27.962236  144144 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8539, replica count: 3
I1023 11:30:31.012786  144144 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 11:30:31.124: INFO: Creating new exec pod
Oct 23 11:30:34.312: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-8539 exec execpod-affinityvhxb8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Oct 23 11:30:35.946: INFO: rc: 1
Oct 23 11:30:35.946: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-8539 exec execpod-affinityvhxb8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-nodeport-transition 80
nc: connect to affinity-nodeport-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 11:30:36.947: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-8539 exec execpod-affinityvhxb8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Oct 23 11:30:38.636: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Oct 23 11:30:38.636: INFO: stdout: ""
Oct 23 11:30:38.636: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-8539 exec execpod-affinityvhxb8 -- /bin/sh -x -c nc -zv -t -w 2 10.0.119.125 80'
... skipping 75 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:24.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8539" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":279,"skipped":4455,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
Oct 23 11:31:38.317: INFO: Deleting pod "simpletest-rc-to-be-deleted-kd2nt" in namespace "gc-5785"
Oct 23 11:31:38.376: INFO: Deleting pod "simpletest-rc-to-be-deleted-n5727" in namespace "gc-5785"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:38.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5785" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":306,"completed":280,"skipped":4480,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 66 lines ...
Oct 23 11:31:47.512: INFO: stderr: ""
Oct 23 11:31:47.512: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:47.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7458" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":306,"completed":281,"skipped":4486,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 23 11:31:48.021: INFO: stderr: ""
Oct 23 11:31:48.021: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1alpha1\nscheduling.k8s.io/v1beta1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:48.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3516" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":306,"completed":282,"skipped":4494,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Oct 23 11:31:49.534: INFO: stderr: ""
Oct 23 11:31:49.534: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:49.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2678" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":306,"completed":283,"skipped":4494,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-ae89d973-61d0-4258-b262-8570b51d1945
STEP: Creating a pod to test consume configMaps
Oct 23 11:31:51.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d" in namespace "projected-4432" to be "Succeeded or Failed"
Oct 23 11:31:51.145: INFO: Pod "pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.587251ms
Oct 23 11:31:53.185: INFO: Pod "pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.130225962s
STEP: Saw pod success
Oct 23 11:31:53.185: INFO: Pod "pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d" satisfied condition "Succeeded or Failed"
Oct 23 11:31:53.222: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:31:53.339: INFO: Waiting for pod pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d to disappear
Oct 23 11:31:53.377: INFO: Pod pod-projected-configmaps-707ac729-ea9d-495e-b783-2112cc729e6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:53.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4432" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":284,"skipped":4495,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-fd79fc7f-fcd5-4f5e-8dfc-466722b7f0e2
STEP: Creating a pod to test consume configMaps
Oct 23 11:31:53.989: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475" in namespace "projected-3542" to be "Succeeded or Failed"
Oct 23 11:31:54.025: INFO: Pod "pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475": Phase="Pending", Reason="", readiness=false. Elapsed: 36.334465ms
Oct 23 11:31:56.062: INFO: Pod "pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073605117s
STEP: Saw pod success
Oct 23 11:31:56.062: INFO: Pod "pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475" satisfied condition "Succeeded or Failed"
Oct 23 11:31:56.102: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475 container agnhost-container: <nil>
STEP: delete the pod
Oct 23 11:31:56.187: INFO: Waiting for pod pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475 to disappear
Oct 23 11:31:56.222: INFO: Pod pod-projected-configmaps-88379fa6-3bf4-4dba-996a-77b5194a7475 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:56.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3542" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":285,"skipped":4502,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 50 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:31:58.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3069" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":306,"completed":286,"skipped":4504,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-dddl
STEP: Creating a pod to test atomic-volume-subpath
Oct 23 11:31:58.626: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-dddl" in namespace "subpath-9218" to be "Succeeded or Failed"
Oct 23 11:31:58.671: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Pending", Reason="", readiness=false. Elapsed: 45.152923ms
Oct 23 11:32:00.708: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 2.081829639s
Oct 23 11:32:02.745: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 4.118476664s
Oct 23 11:32:04.819: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 6.19235053s
Oct 23 11:32:06.866: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 8.239286222s
Oct 23 11:32:08.902: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 10.275826423s
Oct 23 11:32:10.940: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 12.313653531s
Oct 23 11:32:12.979: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 14.352652914s
Oct 23 11:32:15.016: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 16.390158507s
Oct 23 11:32:17.059: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 18.432223446s
Oct 23 11:32:19.096: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Running", Reason="", readiness=true. Elapsed: 20.469562462s
Oct 23 11:32:21.134: INFO: Pod "pod-subpath-test-downwardapi-dddl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.507852742s
STEP: Saw pod success
Oct 23 11:32:21.134: INFO: Pod "pod-subpath-test-downwardapi-dddl" satisfied condition "Succeeded or Failed"
Oct 23 11:32:21.171: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-subpath-test-downwardapi-dddl container test-container-subpath-downwardapi-dddl: <nil>
STEP: delete the pod
Oct 23 11:32:21.260: INFO: Waiting for pod pod-subpath-test-downwardapi-dddl to disappear
Oct 23 11:32:21.296: INFO: Pod pod-subpath-test-downwardapi-dddl no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-dddl
Oct 23 11:32:21.296: INFO: Deleting pod "pod-subpath-test-downwardapi-dddl" in namespace "subpath-9218"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:32:21.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9218" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":306,"completed":287,"skipped":4507,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 46 lines ...
Oct 23 11:34:24.261: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 11:34:24.298: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:34:24.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-646" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":306,"completed":288,"skipped":4536,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}

------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 23 11:34:27.454: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d"
Oct 23 11:34:27.454: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d" in namespace "pods-7555" to be "terminated due to deadline exceeded"
Oct 23 11:34:27.490: INFO: Pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d": Phase="Running", Reason="", readiness=true. Elapsed: 36.000561ms
Oct 23 11:34:29.599: INFO: Pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d": Phase="Running", Reason="", readiness=true. Elapsed: 2.145356332s
Oct 23 11:34:31.637: INFO: Pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.183085467s
Oct 23 11:34:31.637: INFO: Pod "pod-update-activedeadlineseconds-4348b377-cf52-4ebe-95a2-db63e443fa9d" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:34:31.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7555" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":306,"completed":289,"skipped":4536,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 103 lines ...
Oct 23 11:34:39.961: INFO: Pod "webserver-deployment-dd94f59b7-zp48v" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zp48v webserver-deployment-dd94f59b7- deployment-977  e33f1540-5e16-4b8b-be1a-01a2cbaf9027 24536 0 2020-10-23 11:34:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00bdd5d9-00a1-4914-835d-a51ba3ecdf69 0xc005286bd0 0xc005286bd1}] []  [{kube-controller-manager Update v1 2020-10-23 11:34:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00bdd5d9-00a1-4914-835d-a51ba3ecdf69\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lbjrm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lbjrm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lbjrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-0324,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-23 11:34:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:34:39.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-977" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":306,"completed":290,"skipped":4594,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
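"Proportional scaling" in the test above means that when a mid-rollout deployment is resized, the scaling delta is split across the active replica sets in proportion to their current sizes. A simplified pure-function sketch of that arithmetic (illustrative; the real controller also caps each replica set by maxSurge and breaks ties by creation timestamp when handing out leftovers):

```go
package main

import "fmt"

// proportionalScale distributes delta extra replicas across replica sets in
// proportion to their current sizes, giving any rounding leftover to the
// first (assumed largest) replica set in this sketch.
func proportionalScale(current []int, delta int) []int {
	total := 0
	for _, c := range current {
		total += c
	}
	out := make([]int, len(current))
	distributed := 0
	for i, c := range current {
		add := delta * c / total // floor of the proportional share
		out[i] = c + add
		distributed += add
	}
	out[0] += delta - distributed // leftover from integer rounding
	return out
}

func main() {
	// Scaling 8+2 replicas up by 5: shares are 4 and 1.
	fmt.Println(proportionalScale([]int{8, 2}, 5)) // [12 3]
}
```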
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 38 lines ...
Oct 23 11:35:10.023: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 23 11:35:10.296: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:35:10.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2649" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":291,"skipped":4595,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 23 11:35:10.378: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 23 11:35:10.605: INFO: Waiting up to 5m0s for pod "pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39" in namespace "emptydir-7772" to be "Succeeded or Failed"
Oct 23 11:35:10.661: INFO: Pod "pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39": Phase="Pending", Reason="", readiness=false. Elapsed: 56.170478ms
Oct 23 11:35:12.701: INFO: Pod "pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096634714s
STEP: Saw pod success
Oct 23 11:35:12.701: INFO: Pod "pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39" satisfied condition "Succeeded or Failed"
Oct 23 11:35:12.746: INFO: Trying to get logs from node bootstrap-e2e-minion-group-xbjm pod pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39 container test-container: <nil>
STEP: delete the pod
Oct 23 11:35:12.896: INFO: Waiting for pod pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39 to disappear
Oct 23 11:35:12.938: INFO: Pod pod-bdb4c9b9-d38e-48d3-a40e-5e97be919b39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:35:12.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7772" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":292,"skipped":4613,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 23 11:35:26.163: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:26.221: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:27.194: INFO: Unable to read jessie_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:27.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:27.501: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:27.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:28.262: INFO: Lookups using dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7 failed for: [wheezy_udp@dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_udp@dns-test-service.dns-4570.svc.cluster.local jessie_tcp@dns-test-service.dns-4570.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local]

Oct 23 11:35:33.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:33.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:33.492: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:33.565: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:33.999: INFO: Unable to read jessie_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:34.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:34.112: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:34.165: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:34.572: INFO: Lookups using dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7 failed for: [wheezy_udp@dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_udp@dns-test-service.dns-4570.svc.cluster.local jessie_tcp@dns-test-service.dns-4570.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local]

Oct 23 11:35:38.304: INFO: Unable to read wheezy_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.423: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.691: INFO: Unable to read jessie_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.728: INFO: Unable to read jessie_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.766: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:38.804: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:39.039: INFO: Lookups using dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7 failed for: [wheezy_udp@dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_udp@dns-test-service.dns-4570.svc.cluster.local jessie_tcp@dns-test-service.dns-4570.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local]

Oct 23 11:35:43.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.471: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.513: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.803: INFO: Unable to read jessie_udp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.881: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:43.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local from pod dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7: the server could not find the requested resource (get pods dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7)
Oct 23 11:35:44.158: INFO: Lookups using dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7 failed for: [wheezy_udp@dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@dns-test-service.dns-4570.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_udp@dns-test-service.dns-4570.svc.cluster.local jessie_tcp@dns-test-service.dns-4570.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4570.svc.cluster.local]

Oct 23 11:35:49.257: INFO: DNS probes using dns-4570/dns-test-602b3e81-d757-4ff6-a110-e51bc3b8cbf7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:35:49.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4570" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":306,"completed":293,"skipped":4633,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
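The DNS probe names in the log above (`wheezy_udp@...`, `jessie_tcp@...`) combine two prober images, two protocols, and the service's plain and SRV-style cluster-DNS names. A hedged sketch of how such a probe list could be generated (hypothetical helper, not the e2e framework's actual function):

```go
package main

import "fmt"

// probeNames builds the image_protocol@target probe keys seen in the log:
// two images (wheezy, jessie) x two DNS targets (service A record and the
// _http._tcp SRV name) x two protocols (udp, tcp).
func probeNames(service, namespace string) []string {
	targets := []string{
		fmt.Sprintf("%s.%s.svc.cluster.local", service, namespace),
		fmt.Sprintf("_http._tcp.%s.%s.svc.cluster.local", service, namespace),
	}
	var out []string
	for _, image := range []string{"wheezy", "jessie"} {
		for _, target := range targets {
			for _, proto := range []string{"udp", "tcp"} {
				out = append(out, fmt.Sprintf("%s_%s@%s", image, proto, target))
			}
		}
	}
	return out
}

func main() {
	for _, name := range probeNames("dns-test-service", "dns-4570") {
		fmt.Println(name)
	}
}
```

The probes kept failing with "the server could not find the requested resource" until the records propagated, after which the final attempt succeeded.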
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:35:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3162" for this suite.
STEP: Destroying namespace "webhook-3162-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":306,"completed":294,"skipped":4641,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-f5b731c5-d090-4518-b1d8-c107e50e35d6
STEP: Creating a pod to test consume secrets
Oct 23 11:35:56.817: INFO: Waiting up to 5m0s for pod "pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e" in namespace "secrets-8931" to be "Succeeded or Failed"
Oct 23 11:35:56.853: INFO: Pod "pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.786653ms
Oct 23 11:35:58.906: INFO: Pod "pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.088470716s
STEP: Saw pod success
Oct 23 11:35:58.906: INFO: Pod "pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e" satisfied condition "Succeeded or Failed"
Oct 23 11:35:58.978: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:35:59.315: INFO: Waiting for pod pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e to disappear
Oct 23 11:35:59.351: INFO: Pod pod-secrets-e6cbf64c-e2b7-4490-9ee2-27120ab1687e no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:35:59.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8931" for this suite.
STEP: Destroying namespace "secret-namespace-1203" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":306,"completed":295,"skipped":4651,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-5031ba27-6a6a-4f75-902f-bd96e97b955e
STEP: Creating a pod to test consume secrets
Oct 23 11:35:59.981: INFO: Waiting up to 5m0s for pod "pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc" in namespace "secrets-737" to be "Succeeded or Failed"
Oct 23 11:36:00.039: INFO: Pod "pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.116501ms
Oct 23 11:36:02.082: INFO: Pod "pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100755466s
STEP: Saw pod success
Oct 23 11:36:02.082: INFO: Pod "pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc" satisfied condition "Succeeded or Failed"
Oct 23 11:36:02.121: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc container secret-volume-test: <nil>
STEP: delete the pod
Oct 23 11:36:02.234: INFO: Waiting for pod pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc to disappear
Oct 23 11:36:02.271: INFO: Pod pod-secrets-ed17f9b3-fdc9-42df-a82e-ffd57391e9bc no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:36:02.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-737" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":296,"skipped":4701,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-9c2351da-eab1-43f6-9555-32d00cd34dad
STEP: Creating a pod to test consume configMaps
Oct 23 11:36:02.642: INFO: Waiting up to 5m0s for pod "pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa" in namespace "configmap-5736" to be "Succeeded or Failed"
Oct 23 11:36:03.092: INFO: Pod "pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa": Phase="Pending", Reason="", readiness=false. Elapsed: 449.874914ms
Oct 23 11:36:05.160: INFO: Pod "pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.517506648s
STEP: Saw pod success
Oct 23 11:36:05.160: INFO: Pod "pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa" satisfied condition "Succeeded or Failed"
Oct 23 11:36:05.203: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:36:05.506: INFO: Waiting for pod pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa to disappear
Oct 23 11:36:05.545: INFO: Pod pod-configmaps-22cf44f2-60e4-4dcf-9cb2-62ff2024b4fa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:36:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5736" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":297,"skipped":4708,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 17 lines ...
STEP: creating replication controller affinity-nodeport-timeout in namespace services-6331
I1023 11:36:09.746279  144144 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6331, replica count: 3
I1023 11:36:12.846950  144144 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 11:36:12.971: INFO: Creating new exec pod
Oct 23 11:36:16.194: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-6331 exec execpod-affinitydd6j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 23 11:36:17.823: INFO: rc: 1
Oct 23 11:36:17.823: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-6331 exec execpod-affinitydd6j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 11:36:18.823: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-6331 exec execpod-affinitydd6j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 23 11:36:20.370: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Oct 23 11:36:20.370: INFO: stdout: ""
Oct 23 11:36:20.371: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.199.23 --kubeconfig=/workspace/.kube/config --namespace=services-6331 exec execpod-affinitydd6j9 -- /bin/sh -x -c nc -zv -t -w 2 10.0.205.231 80'
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:36:54.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6331" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":298,"skipped":4726,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
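The service test above probes reachability by re-running `nc -zv` through `kubectl exec` once a second, tolerating the initial "Connection refused" until an endpoint comes up. A minimal, self-contained sketch of that retry pattern (all names here are hypothetical; the real framework logic lives in the e2e `service.go` helpers, not in this script):

```shell
#!/bin/sh
# Sketch of the probe-and-retry loop seen in the log: re-run a command
# until it exits 0 or the attempt budget is exhausted.
retry_until() {
  budget=$1; shift
  i=0
  while [ "$i" -lt "$budget" ]; do
    if "$@"; then
      echo "succeeded after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    # The real probe sleeps ~1s between attempts; omitted so the demo runs fast.
  done
  echo "gave up after $budget attempts"
  return 1
}

# Stand-in probe: fails twice, then succeeds, mimicking the
# "Connection refused ... Retrying..." then "succeeded!" sequence above.
COUNT_FILE=$(mktemp)
echo 0 > "$COUNT_FILE"
probe() {
  n=$(cat "$COUNT_FILE")
  echo $((n + 1)) > "$COUNT_FILE"
  [ "$n" -ge 2 ]
}

retry_until 5 probe
```

In the real run the probed command is `kubectl exec ... -- /bin/sh -x -c 'nc -zv -t -w 2 <service> <port>'`, and the budget is a wall-clock deadline rather than an attempt count.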
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:37:06.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7424" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":306,"completed":299,"skipped":4775,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-7076963f-7479-428b-af5b-9b0c7f8b5b00
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:38:35.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7295" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":300,"skipped":4783,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:38:40.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8478" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":306,"completed":301,"skipped":4787,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 23 11:38:40.592: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:38:56.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2275" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":306,"completed":302,"skipped":4826,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:38:59.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5856" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":306,"completed":303,"skipped":4850,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 23 11:38:59.867: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Oct 23 11:39:00.155: INFO: Waiting up to 5m0s for pod "var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd" in namespace "var-expansion-7288" to be "Succeeded or Failed"
Oct 23 11:39:00.195: INFO: Pod "var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.345115ms
Oct 23 11:39:02.251: INFO: Pod "var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096110881s
STEP: Saw pod success
Oct 23 11:39:02.251: INFO: Pod "var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd" satisfied condition "Succeeded or Failed"
Oct 23 11:39:02.325: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd container dapi-container: <nil>
STEP: delete the pod
Oct 23 11:39:02.883: INFO: Waiting for pod var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd to disappear
Oct 23 11:39:03.086: INFO: Pod var-expansion-66781e61-8c78-4375-8c5f-0469aa03dabd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:39:03.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7288" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":306,"completed":304,"skipped":4878,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-f49de14b-05fb-4fa2-abc3-13230e704931
STEP: Creating a pod to test consume configMaps
Oct 23 11:39:03.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c" in namespace "configmap-5546" to be "Succeeded or Failed"
Oct 23 11:39:03.902: INFO: Pod "pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.489983ms
Oct 23 11:39:05.940: INFO: Pod "pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094962599s
STEP: Saw pod success
Oct 23 11:39:05.940: INFO: Pod "pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c" satisfied condition "Succeeded or Failed"
Oct 23 11:39:05.977: INFO: Trying to get logs from node bootstrap-e2e-minion-group-0324 pod pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c container configmap-volume-test: <nil>
STEP: delete the pod
Oct 23 11:39:06.070: INFO: Waiting for pod pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c to disappear
Oct 23 11:39:06.108: INFO: Pod pod-configmaps-884d9175-f9bd-4cce-994d-eb5cfb750e0c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 23 11:39:06.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5546" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":305,"skipped":4913,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
SSSSSSSSSSOct 23 11:39:06.188: INFO: Running AfterSuite actions on all nodes
Oct 23 11:39:06.188: INFO: Running AfterSuite actions on node 1
Oct 23 11:39:06.188: INFO: Skipping dumping logs from cluster

JUnit report was created: /logs/artifacts/after/junit_01.xml
{"msg":"Test Suite completed","total":306,"completed":305,"skipped":4923,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}


Summarizing 1 Failure:

[Fail] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 306 of 5229 Specs in 6532.490 seconds
FAIL! -- 305 Passed | 1 Failed | 0 Pending | 4923 Skipped
--- FAIL: TestE2E (6532.53s)
FAIL

Ginkgo ran 1 suite in 1h48m53.908747014s
Test Suite Failed
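Each `•{...}` progress marker above, and the final "Test Suite completed" record, is a single-line JSON object. A quick sed sketch for pulling the counters out of a saved log (the sample line is a trimmed copy of the summary record, with the `failures` array dropped for brevity):

```shell
#!/bin/sh
# Extract the "completed" and "failed" counters from a ginkgo summary line.
line='{"msg":"Test Suite completed","total":306,"completed":305,"skipped":4923,"failed":1}'

completed=$(printf '%s' "$line" | sed -n 's/.*"completed":\([0-9]*\).*/\1/p')
failed=$(printf '%s' "$line" | sed -n 's/.*"failed":\([0-9]*\).*/\1/p')

echo "completed=$completed failed=$failed"
```

Note the pattern `"failed":` will not accidentally match the `failures` key (no colon follows it), so plain sed suffices here; for anything more involved, `jq` over the extracted lines is the safer tool.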
2020/10/23 11:39:06 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true' finished in 1h48m55.072300484s
2020/10/23 11:39:06 e2e.go:544: Dumping logs locally to: /logs/artifacts/after
2020/10/23 11:39:06 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts/after
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
... skipping 2 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/after'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.82.199.23; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=56989 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/after'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-0324
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-jt1z
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-xbjm

Specify --start=105385 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=76946 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=85024 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-0324 bootstrap-e2e-minion-group-jt1z bootstrap-e2e-minion-group-xbjm
Failures for bootstrap-e2e-minion-group (if any):
2020/10/23 11:41:22 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/after' finished in 2m16.025207377s
2020/10/23 11:41:22 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-gci-gce-sd-log
... skipping 40 lines ...
Property "users.k8s-jkns-gci-gce-sd-log_bootstrap-e2e-basic-auth" unset.
Property "contexts.k8s-jkns-gci-gce-sd-log_bootstrap-e2e" unset.
Cleared config for k8s-jkns-gci-gce-sd-log_bootstrap-e2e from /workspace/.kube/config
Done
2020/10/23 11:48:04 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m42.162036669s
2020/10/23 11:48:04 process.go:96: Saved XML output to /logs/artifacts/after/junit_runner.xml.
2020/10/23 11:48:04 main.go:316: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true: exit status 1]
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 16 lines ...