Result: FAILURE
Tests: 1 failed / 178 succeeded
Started: 2020-01-08 14:44
Elapsed: 4h49m
Builder: gke-prow-default-pool-cf4891d4-rfrs
pod: 3280dda4-3225-11ea-9709-02f27a93e62e
resultstore: https://source.cloud.google.com/results/invocations/f985f28d-b181-4fc5-9a33-5cf0e69bd666/targets/test
infra-commit: 99c2f9ae2
job-version: v1.16.5-beta.1.41+2d91933c2bb0a3
revision: v1.16.5-beta.1.41+2d91933c2bb0a3
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0

Test Failures


UpgradeTest 11s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/k8s-beta --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml
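The `--ginkgo.focus` value in the failing kubetest invocation is a regular expression, which is why the brackets in `[Feature:MasterUpgrade]` are backslash-escaped: unescaped, they would be a character class. A minimal sketch of how that focus pattern selects specs (the spec names below are hypothetical, for illustration only):

```shell
# --ginkgo.focus is a regex matched against full spec names; the
# brackets must be escaped to match them literally.
# Hypothetical spec names:
printf '%s\n' \
  '[sig-cluster-lifecycle] master upgrade [Feature:MasterUpgrade]' \
  '[sig-storage] CSI Volumes subPath should fail' |
  grep -E '\[Feature:MasterUpgrade\]'
# prints only the first line
```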



178 passed tests and 4572 skipped tests are collapsed in this view.

Error lines from build-log.txt

... skipping 15 lines ...
I0108 14:44:10.392] process 47 exited with code 0 after 0.0m
I0108 14:44:10.392] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0108 14:44:10.393] Root: /workspace
I0108 14:44:10.393] cd to /workspace
I0108 14:44:10.393] Configure environment...
I0108 14:44:10.393] Call:  git show -s --format=format:%ct HEAD
W0108 14:44:10.397] fatal: not a git repository (or any of the parent directories): .git
I0108 14:44:10.397] process 60 exited with code 128 after 0.0m
W0108 14:44:10.397] Unable to print commit date for HEAD
I0108 14:44:10.397] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0108 14:44:10.889] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0108 14:44:11.209] process 61 exited with code 0 after 0.0m
I0108 14:44:11.210] Call:  gcloud config get-value account
... skipping 379 lines ...
W0108 14:47:50.614] Trying to find master named 'bootstrap-e2e-master'
W0108 14:47:50.614] Looking for address 'bootstrap-e2e-master-ip'
W0108 14:47:51.443] Using master: bootstrap-e2e-master (external IP: 35.247.24.171; internal IP: (not set))
I0108 14:47:51.544] Waiting up to 300 seconds for cluster initialization.
I0108 14:47:51.544] 
I0108 14:47:51.544]   This will continually check to see if the API for kubernetes is reachable.
I0108 14:47:51.544]   This may time out if there was some uncaught error during start up.
I0108 14:47:51.544] 
I0108 14:48:51.196] .............Kubernetes cluster created.
I0108 14:48:51.392] Cluster "gce-cvm-upg-1-3-lat-ctl-skew_bootstrap-e2e" set.
I0108 14:48:51.575] User "gce-cvm-upg-1-3-lat-ctl-skew_bootstrap-e2e" set.
I0108 14:48:51.753] Context "gce-cvm-upg-1-3-lat-ctl-skew_bootstrap-e2e" created.
I0108 14:48:51.963] Switched to context "gce-cvm-upg-1-3-lat-ctl-skew_bootstrap-e2e".
... skipping 145 lines ...
I0108 14:50:21.343]     	If true, adds the file directory to the header
I0108 14:50:21.343]   -allow-gathering-profiles
I0108 14:50:21.346]     	If set to true framework will allow to gather CPU/memory allocation pprof profiles from the master. (default true)
I0108 14:50:21.346]   -allowed-not-ready-nodes int
I0108 14:50:21.346]     	If non-zero, framework will allow for that many non-ready nodes when checking for all ready nodes.
I0108 14:50:21.346]   -alsologtostderr
I0108 14:50:21.347]     	log to standard error as well as files
I0108 14:50:21.347]   -application_metrics_count_limit int
I0108 14:50:21.347]     	Max number of application metrics to store (per container) (default 100)
I0108 14:50:21.347]   -boot_id_file string
I0108 14:50:21.347]     	Comma-separated list of files to check for boot-id. Use the first one that exists. (default "/proc/sys/kernel/random/boot_id")
I0108 14:50:21.347]   -cert-dir string
I0108 14:50:21.347]     	Path to the directory containing the certs. Default is empty, which doesn't use certs.
I0108 14:50:21.348]   -clean-start
I0108 14:50:21.348]     	If true, purge all namespaces except default and system before running tests. This serves to Cleanup test namespaces from failed/interrupted e2e runs in a long-lived cluster.
I0108 14:50:21.348]   -cloud-config-file string
I0108 14:50:21.348]     	Cloud config file.  Only required if provider is azure.
I0108 14:50:21.348]   -cloud-provider-gce-l7lb-src-cidrs value
I0108 14:50:21.348]     	CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks (default 130.211.0.0/22,35.191.0.0/16)
I0108 14:50:21.358]   -cloud-provider-gce-lb-src-cidrs value
I0108 14:50:21.358]     	CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks (default 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16)
... skipping 97 lines ...
I0108 14:50:21.409]     	If set, ginkgo will emit node output to files when running in parallel.
I0108 14:50:21.409]   -ginkgo.dryRun
I0108 14:50:21.410]     	If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
I0108 14:50:21.410]   -ginkgo.failFast
I0108 14:50:21.414]     	If set, ginkgo will stop running a test suite after a failure occurs.
I0108 14:50:21.414]   -ginkgo.failOnPending
I0108 14:50:21.414]     	If set, ginkgo will mark the test suite as failed if any specs are pending.
I0108 14:50:21.415]   -ginkgo.flakeAttempts int
I0108 14:50:21.415]     	Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
I0108 14:50:21.415]   -ginkgo.focus string
I0108 14:50:21.415]     	If set, ginkgo will only run specs that match this regular expression.
I0108 14:50:21.415]   -ginkgo.noColor
I0108 14:50:21.415]     	If set, suppress color output in default reporter.
I0108 14:50:21.415]   -ginkgo.noisyPendings
I0108 14:50:21.416]     	If set, default reporter will shout about pending tests. (default true)
... skipping 73 lines ...
I0108 14:50:21.453]     	If non-empty, use this log file
I0108 14:50:21.453]   -log_file_max_size uint
I0108 14:50:21.454]     	Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
I0108 14:50:21.462]   -logexporter-gcs-path string
I0108 14:50:21.462]     	Path to the GCS artifacts directory to dump logs from nodes. Logexporter gets enabled if this is non-empty.
I0108 14:50:21.462]   -logtostderr
I0108 14:50:21.463]     	log to standard error instead of files (default true)
I0108 14:50:21.463]   -machine_id_file string
I0108 14:50:21.463]     	Comma-separated list of files to check for machine-id. Use the first one that exists. (default "/etc/machine-id,/var/lib/dbus/machine-id")
I0108 14:50:21.463]   -master-os-distro string
I0108 14:50:21.463]     	The OS distribution of cluster master (debian, ubuntu, gci, coreos, or custom). (default "debian")
I0108 14:50:21.463]   -master-tag string
I0108 14:50:21.464]     	Network tags used on master instances. Valid only for gce, gke
... skipping 130 lines ...
I0108 14:50:21.538]   -vmodule value
I0108 14:50:21.538]     	comma-separated list of pattern=N settings for file-filtered logging
I0108 14:50:21.539]   -volume-dir string
I0108 14:50:21.539]     	Path to the directory containing the kubelet volumes. (default "/var/lib/kubelet")
I0108 14:50:21.539] 
I0108 14:50:21.540] Ginkgo ran 1 suite in 3.015568803s
I0108 14:50:21.540] Test Suite Failed
W0108 14:50:21.642] !!! Error in ./hack/ginkgo-e2e.sh:150
W0108 14:50:21.644]   Error in ./hack/ginkgo-e2e.sh:150. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --dns-domain="${KUBE_DNS_DOMAIN:-cluster.local}" --ginkgo.slowSpecThreshold="${GINKGO_SLOW_SPEC_THRESHOLD:-300}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
W0108 14:50:21.644] Call stack:
W0108 14:50:21.644]   1: ./hack/ginkgo-e2e.sh:150 main(...)
W0108 14:50:21.645] Exiting with status 1
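The quoted command above leans on the `${arr[@]:+${arr[@]}}` expansion used throughout hack/ginkgo-e2e.sh. Under `set -u`, bash 4.3 and earlier treat expanding an empty array as an unbound-variable error; the `:+` form expands to zero words instead. A minimal sketch of the idiom (variable name reused from the script, the rest is illustrative):

```shell
bash -c '
set -u
ginkgo_args=()          # possibly empty array, as in the script
# Expanding "${ginkgo_args[@]}" alone can trip `set -u` on bash <= 4.3;
# the :+ form safely expands to zero words when the array is empty.
set -- "${ginkgo_args[@]:+${ginkgo_args[@]}}" ./e2e.test
echo "$#"               # prints 1: only ./e2e.test remains
'
```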
W0108 14:50:21.646] 2020/01/08 14:50:21 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/k8s-beta --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade' finished in 4.732156154s
W0108 14:50:21.647] 2020/01/08 14:50:21 main.go:316: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/k8s-beta --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade: exit status 1]
W0108 14:50:21.647] 2020/01/08 14:50:21 process.go:155: Step 'kubetest --test --test_args=--ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/k8s-beta --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false' finished in 11.175332766s
W0108 14:50:21.648] 2020/01/08 14:50:21 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
W0108 14:50:21.648] Project: gce-cvm-upg-1-3-lat-ctl-skew
W0108 14:50:21.650] Network Project: gce-cvm-upg-1-3-lat-ctl-skew
W0108 14:50:21.650] Zone: us-west1-b
I0108 14:50:22.347] Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.5-beta.1.41+2d91933c2bb0a3", GitCommit:"2d91933c2bb0a3a8794f5e9b5b1024b7bc18836e", GitTreeState:"clean", BuildDate:"2020-01-07T19:52:18Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 688 lines ...
I0108 14:56:26.806] 
I0108 14:56:26.806]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:151
I0108 14:56:26.807] ------------------------------
I0108 14:56:26.807] SS
I0108 14:56:26.807] ------------------------------
I0108 14:56:26.807] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath 
I0108 14:56:26.808]   should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0108 14:56:26.808]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239
I0108 14:56:26.808] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0108 14:56:26.808]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
I0108 14:56:26.808] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0108 14:56:26.809]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0108 14:56:26.809] STEP: Creating a kubernetes client
I0108 14:56:26.809] Jan  8 14:56:26.798: INFO: >>> kubeConfig: /workspace/.kube/config
I0108 14:56:26.809] STEP: Building a namespace api object, basename provisioning
I0108 14:56:26.954] STEP: Waiting for a default service account to be provisioned in namespace
I0108 14:56:26.991] [It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0108 14:56:26.992]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239
I0108 14:56:26.992] STEP: deploying csi gce-pd driver
I0108 14:56:27.029] Jan  8 14:56:27.029: INFO: Found CI service account key at /etc/service-account/service-account.json
I0108 14:56:27.030] Jan  8 14:56:27.029: INFO: Running cp [/etc/service-account/service-account.json /tmp/faa24405-df16-4b48-9de6-4d5df8751850/cloud-sa.json]
I0108 14:56:27.071] Jan  8 14:56:27.071: INFO: Shredding file /tmp/faa24405-df16-4b48-9de6-4d5df8751850/cloud-sa.json
I0108 14:56:27.072] Jan  8 14:56:27.071: INFO: Running shred [--remove /tmp/faa24405-df16-4b48-9de6-4d5df8751850/cloud-sa.json]
... skipping 21 lines ...
I0108 14:56:27.881] Jan  8 14:56:27.880: INFO: Test running for native CSI Driver, not checking metrics
I0108 14:56:27.881] Jan  8 14:56:27.880: INFO: Creating resource for dynamic PV
I0108 14:56:27.881] STEP: creating a StorageClass provisioning-3420-pd.csi.storage.gke.io-scff2vh
I0108 14:56:27.966] STEP: creating a claim
I0108 14:56:27.967] Jan  8 14:56:27.966: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0108 14:56:28.060] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-wb6f
I0108 14:56:28.104] STEP: Checking for subpath error in container status
I0108 14:56:56.181] Jan  8 14:56:56.181: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-wb6f" in namespace "provisioning-3420"
I0108 14:56:56.226] Jan  8 14:56:56.226: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-wb6f" to be fully deleted
I0108 14:57:06.304] STEP: Deleting pod
I0108 14:57:06.305] Jan  8 14:57:06.304: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-wb6f" in namespace "provisioning-3420"
I0108 14:57:06.342] STEP: Deleting pvc
I0108 14:57:06.343] Jan  8 14:57:06.342: INFO: Deleting PersistentVolumeClaim "pd.csi.storage.gke.ioncsjr"
... skipping 668 lines ...
I0108 15:00:58.839] Jan  8 15:00:58.836: INFO: 	Container heapster-nanny ready: true, restart count 1
I0108 15:00:58.840] Jan  8 15:00:58.836: INFO: fluentd-gcp-v3.2.0-94hpj from kube-system started at 2020-01-08 14:49:53 +0000 UTC (2 container statuses recorded)
I0108 15:00:58.840] Jan  8 15:00:58.836: INFO: 	Container fluentd-gcp ready: true, restart count 1
I0108 15:00:58.840] Jan  8 15:00:58.836: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 1
I0108 15:00:58.840] [It] validates MaxPods limit number of pods that are allowed to run [Slow]
I0108 15:00:58.840]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:123
I0108 15:00:58.847] Jan  8 15:00:58.836: INFO: Node: {{ } {bootstrap-e2e-minion-group-d6hq   /api/v1/nodes/bootstrap-e2e-minion-group-d6hq c4f9bfb7-e8cf-4f82-95c3-3d459bf2d2d6 3041 0 2020-01-08 14:49:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-d6hq kubernetes.io/os:linux topology.gke.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-minion-group-d6hq"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []} {10.64.0.0/24 [10.64.0.0/24] gce://gce-cvm-upg-1-3-lat-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-d6hq false [] nil } {map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{101241290752 0} {<nil>}  BinarySI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7841861632 0} {<nil>} 7658068Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{91117161526 0} {<nil>} 91117161526 DecimalSI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7579717632 0} {<nil>} 7402068Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{FrequentContainerdRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentContainerdRestart containerd is functioning properly} {KernelDeadlock False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} 
{CorruptDockerOverlay2 False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoCorruptDockerOverlay2 docker overlay2 is functioning properly} {FrequentUnregisterNetDevice False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentUnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentDockerRestart docker is functioning properly} {NetworkUnavailable False 2020-01-08 14:49:25 +0000 UTC 2020-01-08 14:49:25 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}] [{InternalIP 10.138.0.3} {ExternalIP 35.233.241.62} {InternalDNS bootstrap-e2e-minion-group-d6hq.c.gce-cvm-upg-1-3-lat-ctl-skew.internal} {Hostname bootstrap-e2e-minion-group-d6hq.c.gce-cvm-upg-1-3-lat-ctl-skew.internal}] {{10250}} {15c333afb2c43f06d8efd7d9ce16cba3 15C333AF-B2C4-3F06-D8EF-D7D9CE16CBA3 58d07a61-4d90-40a1-b0e0-149682930812 4.14.94+ Container-Optimized OS from Google docker://18.9.3 v1.16.5-beta.1.41+2d91933c2bb0a3 v1.16.5-beta.1.41+2d91933c2bb0a3 linux amd64} [{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:0c2841a57382023dda9aa864bd3122978c7071dd26e2f8302cc1f7b93f920088 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.5.2-gke.0] 138042636} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[k8s.gcr.io/kube-proxy-amd64:v1.16.5-beta.1.41_2d91933c2bb0a3] 96689662} {[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2] 90498960} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[k8s.gcr.io/event-exporter@sha256:06acf489ab092b4fb49273e426549a52c0fcd1dbcb67e03d5935b5ee1a899c3e k8s.gcr.io/event-exporter:v0.2.5] 47261019} {[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2] 44100963} 
{[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6] 39944451} {[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7] 37930718} {[gcr.io/gke-release/csi-node-driver-registrar@sha256:7de27ed3118f0bea834cc45edaaa88f83ae3180f6977bf4cccfe00725674a22d gcr.io/gke-release/csi-node-driver-registrar:v1.1.0-gke.0] 17190835} {[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0] 13513083} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}] [] [] &NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,}}}
I0108 15:00:58.852] Jan  8 15:00:58.837: INFO: Node: {{ } {bootstrap-e2e-minion-group-g8wm   /api/v1/nodes/bootstrap-e2e-minion-group-g8wm 1b67dd69-17dc-4357-899a-ba60798f1e36 3044 0 2020-01-08 14:49:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g8wm kubernetes.io/os:linux topology.gke.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-minion-group-g8wm"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []} {10.64.2.0/24 [10.64.2.0/24] gce://gce-cvm-upg-1-3-lat-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-g8wm false [] nil } {map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{101241290752 0} {<nil>}  BinarySI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7841853440 0} {<nil>} 7658060Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{91117161526 0} {<nil>} 91117161526 DecimalSI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7579709440 0} {<nil>} 7402060Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{CorruptDockerOverlay2 False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoCorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} 
{FrequentUnregisterNetDevice False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentUnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-01-08 15:00:39 +0000 UTC 2020-01-08 14:59:37 +0000 UTC NoFrequentContainerdRestart containerd is functioning properly} {NetworkUnavailable False 2020-01-08 14:49:36 +0000 UTC 2020-01-08 14:49:36 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-08 15:00:48 +0000 UTC 2020-01-08 14:59:38 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}] [{InternalIP 10.138.0.5} {ExternalIP 34.82.30.218} {InternalDNS bootstrap-e2e-minion-group-g8wm.c.gce-cvm-upg-1-3-lat-ctl-skew.internal} {Hostname bootstrap-e2e-minion-group-g8wm.c.gce-cvm-upg-1-3-lat-ctl-skew.internal}] {{10250}} {02c2f3aaeeb5f4449b3fccda344aae00 02C2F3AA-EEB5-F444-9B3F-CCDA344AAE00 8619fe27-4d90-488d-b57e-2bc987a5ae64 4.14.94+ Container-Optimized OS from Google docker://18.9.3 v1.16.5-beta.1.41+2d91933c2bb0a3 v1.16.5-beta.1.41+2d91933c2bb0a3 linux amd64} [{[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:0c2841a57382023dda9aa864bd3122978c7071dd26e2f8302cc1f7b93f920088 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.5.2-gke.0] 138042636} {[k8s.gcr.io/kube-proxy-amd64:v1.16.5-beta.1.41_2d91933c2bb0a3] 96689662} {[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1] 54431016} {[gcr.io/gke-release/csi-attacher@sha256:0d53e62ad3d025e1f5f148c22101cb76393619b0804d5757f27220002aabb4cc gcr.io/gke-release/csi-attacher:v1.2.0-gke.0] 51424703} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6] 39944451} {[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7] 37930718} 
{[gcr.io/gke-release/csi-node-driver-registrar@sha256:7de27ed3118f0bea834cc45edaaa88f83ae3180f6977bf4cccfe00725674a22d gcr.io/gke-release/csi-node-driver-registrar:v1.1.0-gke.0] 17190835} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}] [] [] &NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,}}}
I0108 15:00:58.857] Jan  8 15:00:58.837: INFO: Node: {{ } {bootstrap-e2e-minion-group-t3jj   /api/v1/nodes/bootstrap-e2e-minion-group-t3jj b6e63189-49df-44bc-a9bc-330cc685e3e5 3049 0 2020-01-08 14:49:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-t3jj kubernetes.io/os:linux topology.gke.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-minion-group-t3jj"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []} {10.64.1.0/24 [10.64.1.0/24] gce://gce-cvm-upg-1-3-lat-ctl-skew/us-west1-b/bootstrap-e2e-minion-group-t3jj false [] nil } {map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{101241290752 0} {<nil>}  BinarySI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7841861632 0} {<nil>} 7658068Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[attachable-volumes-gce-pd:{{127 0} {<nil>} 127 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} ephemeral-storage:{{91117161526 0} {<nil>} 91117161526 DecimalSI} hugepages-2Mi:{{0 0} {<nil>} 0 DecimalSI} memory:{{7579717632 0} {<nil>} 7402068Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{FrequentContainerdRestart False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC NoFrequentContainerdRestart containerd is functioning properly} {KernelDeadlock False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} 
{FrequentUnregisterNetDevice False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC NoFrequentUnregisterNetDevice node is functioning properly} {CorruptDockerOverlay2 False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC NoCorruptDockerOverlay2 docker overlay2 is functioning properly} {FrequentKubeletRestart False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC NoFrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-01-08 15:00:41 +0000 UTC 2020-01-08 14:59:39 +0000 UTC NoFrequentDockerRestart docker is functioning properly} {NetworkUnavailable False 2020-01-08 14:49:25 +0000 UTC 2020-01-08 14:49:25 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-01-08 15:00:50 +0000 UTC 2020-01-08 14:59:40 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-08 15:00:50 +0000 UTC 2020-01-08 14:59:40 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-08 15:00:50 +0000 UTC 2020-01-08 14:59:40 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-08 15:00:50 +0000 UTC 2020-01-08 14:59:40 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}] [{InternalIP 10.138.0.4} {ExternalIP 35.199.190.183} {InternalDNS bootstrap-e2e-minion-group-t3jj.c.gce-cvm-upg-1-3-lat-ctl-skew.internal} {Hostname bootstrap-e2e-minion-group-t3jj.c.gce-cvm-upg-1-3-lat-ctl-skew.internal}] {{10250}} {a505822bfab6c217b75673bd59dffb4d A505822B-FAB6-C217-B756-73BD59DFFB4D d373b321-876b-466c-86bd-5f52d4273f73 4.14.94+ Container-Optimized OS from Google docker://18.9.3 v1.16.5-beta.1.41+2d91933c2bb0a3 v1.16.5-beta.1.41+2d91933c2bb0a3 linux amd64} [{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:0c2841a57382023dda9aa864bd3122978c7071dd26e2f8302cc1f7b93f920088 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.5.2-gke.0] 138042636} {[k8s.gcr.io/kube-proxy-amd64:v1.16.5-beta.1.41_2d91933c2bb0a3] 96689662} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2] 44100963} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1] 40067731} {[k8s.gcr.io/addon-resizer@sha256:30b3b12e471c534949e12d2da958fdf33848d153f2a0a88565bdef7ca999b5ad k8s.gcr.io/addon-resizer:1.8.7] 37930718} {[gcr.io/gke-release/csi-node-driver-registrar@sha256:7de27ed3118f0bea834cc45edaaa88f83ae3180f6977bf4cccfe00725674a22d gcr.io/gke-release/csi-node-driver-registrar:v1.1.0-gke.0] 17190835} 
{[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}] [] [] &NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,}}}
I0108 15:00:58.879] STEP: Starting additional 312 Pods to fully saturate the cluster max pods and trying to start another one
I0108 15:01:12.589] Jan  8 15:01:12.588: INFO: Waiting for running...
I0108 15:03:27.881] STEP: Considering event: 
I0108 15:03:27.881] Type = [Normal], Name = [maxp-0.15e7f170464daedf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6852/maxp-0 to bootstrap-e2e-minion-group-t3jj]
I0108 15:03:27.881] STEP: Considering event: 
I0108 15:03:27.881] Type = [Normal], Name = [maxp-0.15e7f17269eac618], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
... skipping 2699 lines ...
I0108 15:05:06.555] 
I0108 15:05:06.555]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:146
I0108 15:05:06.555] ------------------------------
I0108 15:05:06.555] S
I0108 15:05:06.556] ------------------------------
I0108 15:05:06.556] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode 
I0108 15:05:06.556]   should fail to use a volume in a pod with mismatched mode [Slow]
I0108 15:05:06.556]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278
I0108 15:05:06.556] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
I0108 15:05:06.557]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
I0108 15:05:06.557] Jan  8 15:05:06.540: INFO: Driver local doesn't support DynamicPV -- skipping
I0108 15:05:06.557] [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
I0108 15:05:06.557]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
... skipping 3 lines ...
I0108 15:05:06.558] [sig-storage] In-tree Volumes
I0108 15:05:06.558] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0108 15:05:06.558]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0108 15:05:06.559]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
I0108 15:05:06.559]     [Testpattern: Dynamic PV (block volmode)] volumeMode
I0108 15:05:06.559]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0108 15:05:06.559]       should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
I0108 15:05:06.559]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278
I0108 15:05:06.560] 
I0108 15:05:06.560]       Driver local doesn't support DynamicPV -- skipping
I0108 15:05:06.560] 
I0108 15:05:06.560]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:146
I0108 15:05:06.560] ------------------------------
... skipping 151 lines ...
I0108 15:05:06.587] 
I0108 15:05:06.587]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:151
I0108 15:05:06.587] ------------------------------
I0108 15:05:06.587] SS
I0108 15:05:06.587] ------------------------------
I0108 15:05:06.587] [sig-apps] Daemon set [Serial] 
I0108 15:05:06.587]   should retry creating failed daemon pods [Conformance]
I0108 15:05:06.588]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0108 15:05:06.588] [BeforeEach] [sig-apps] Daemon set [Serial]
I0108 15:05:06.588]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0108 15:05:06.588] STEP: Creating a kubernetes client
I0108 15:05:06.588] Jan  8 15:05:06.548: INFO: >>> kubeConfig: /workspace/.kube/config
I0108 15:05:06.588] STEP: Building a namespace api object, basename daemonsets
I0108 15:05:06.712] STEP: Waiting for a default service account to be provisioned in namespace
I0108 15:05:06.749] [BeforeEach] [sig-apps] Daemon set [Serial]
I0108 15:05:06.750]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
I0108 15:05:06.992] [It] should retry creating failed daemon pods [Conformance]
I0108 15:05:06.992]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0108 15:05:06.992] STEP: Creating a simple DaemonSet "daemon-set"
I0108 15:05:07.033] STEP: Check that daemon pods launch on every node of the cluster.
I0108 15:05:07.121] Jan  8 15:05:07.121: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:07.178] Jan  8 15:05:07.178: INFO: Number of nodes with available pods: 0
I0108 15:05:07.179] Jan  8 15:05:07.178: INFO: Node bootstrap-e2e-minion-group-d6hq is running more than one daemon pod
... skipping 18 lines ...
I0108 15:05:14.218] Jan  8 15:05:14.217: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:14.255] Jan  8 15:05:14.255: INFO: Number of nodes with available pods: 2
I0108 15:05:14.256] Jan  8 15:05:14.255: INFO: Node bootstrap-e2e-minion-group-t3jj is running more than one daemon pod
I0108 15:05:15.218] Jan  8 15:05:15.217: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:15.257] Jan  8 15:05:15.257: INFO: Number of nodes with available pods: 3
I0108 15:05:15.257] Jan  8 15:05:15.257: INFO: Number of running nodes: 3, number of available pods: 3
I0108 15:05:15.295] STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
I0108 15:05:15.429] Jan  8 15:05:15.429: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:15.470] Jan  8 15:05:15.469: INFO: Number of nodes with available pods: 2
I0108 15:05:15.470] Jan  8 15:05:15.469: INFO: Node bootstrap-e2e-minion-group-t3jj is running more than one daemon pod
I0108 15:05:16.514] Jan  8 15:05:16.513: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:16.557] Jan  8 15:05:16.557: INFO: Number of nodes with available pods: 2
I0108 15:05:16.558] Jan  8 15:05:16.557: INFO: Node bootstrap-e2e-minion-group-t3jj is running more than one daemon pod
I0108 15:05:17.509] Jan  8 15:05:17.508: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:17.547] Jan  8 15:05:17.547: INFO: Number of nodes with available pods: 2
I0108 15:05:17.547] Jan  8 15:05:17.547: INFO: Node bootstrap-e2e-minion-group-t3jj is running more than one daemon pod
I0108 15:05:18.510] Jan  8 15:05:18.509: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0108 15:05:18.548] Jan  8 15:05:18.548: INFO: Number of nodes with available pods: 3
I0108 15:05:18.548] Jan  8 15:05:18.548: INFO: Number of running nodes: 3, number of available pods: 3
I0108 15:05:18.548] STEP: Wait for the failed daemon pod to be completely deleted.
I0108 15:05:18.585] [AfterEach] [sig-apps] Daemon set [Serial]
I0108 15:05:18.586]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
I0108 15:05:18.625] STEP: Deleting DaemonSet "daemon-set"
I0108 15:05:18.625] STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5236, will wait for the garbage collector to delete the pods
I0108 15:05:18.758] Jan  8 15:05:18.756: INFO: Deleting DaemonSet.extensions daemon-set took: 42.908474ms
I0108 15:05:19.457] Jan  8 15:05:19.456: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.327509ms
... skipping 35 lines ...
I0108 15:05:38.117] 
I0108 15:05:38.117]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:146
I0108 15:05:38.117] ------------------------------
I0108 15:05:38.118] SSSSSSS
I0108 15:05:38.118] ------------------------------
I0108 15:05:38.118] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode 
I0108 15:05:38.118]   should fail to use a volume in a pod with mismatched mode [Slow]
I0108 15:05:38.119]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278
I0108 15:05:38.119] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0108 15:05:38.119]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
I0108 15:05:38.119] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0108 15:05:38.119]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0108 15:05:38.120] STEP: Creating a kubernetes client
I0108 15:05:38.120] Jan  8 15:05:38.113: INFO: >>> kubeConfig: /workspace/.kube/config
I0108 15:05:38.120] STEP: Building a namespace api object, basename volumemode
I0108 15:05:38.274] STEP: Waiting for a default service account to be provisioned in namespace
I0108 15:05:38.313] [It] should fail to use a volume in a pod with mismatched mode [Slow]
I0108 15:05:38.313]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278
I0108 15:05:38.313] Jan  8 15:05:38.312: INFO: Driver "local" does not provide raw block - skipping
I0108 15:05:38.313] [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0108 15:05:38.314]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0108 15:05:38.314] Jan  8 15:05:38.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0108 15:05:38.352] STEP: Destroying namespace "volumemode-4288" for this suite.
... skipping 4 lines ...
I0108 15:05:45.779] [sig-storage] In-tree Volumes
I0108 15:05:45.779] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0108 15:05:45.779]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0108 15:05:45.779]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
I0108 15:05:45.780]     [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0108 15:05:45.780]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0108 15:05:45.780]       should fail to use a volume in a pod with mismatched mode [Slow] [It]
I0108 15:05:45.780]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278
I0108 15:05:45.780] 
I0108 15:05:45.780]       Driver "local" does not provide raw block - skipping
I0108 15:05:45.781] 
I0108 15:05:45.781]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:95
I0108 15:05:45.781] ------------------------------