Result: FAILURE
Tests: 1 failed / 177 succeeded
Started: 2020-01-11 15:39
Elapsed: 6h25m
Builder: gke-prow-default-pool-cf4891d4-qq1r
pod: 6ad31204-3488-11ea-9fef-d200904e1a96
resultstore: https://source.cloud.google.com/results/invocations/738614d6-a561-4b00-94c8-e4624366e719/targets/test
infra-commit: b82ca85d5
job-version: v1.15.8-beta.1.30+14ede42c4fe699
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.8-beta.1.30+14ede42c4fe699

Test Failures


diffResources 0.00s

Error: 1 leaked resources
+default-route-8bd8ac10ff88e99d  default  10.178.0.0/20  default                   1000
				from junit_runner.xml
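The diffResources check snapshots the project's cloud resources (routes, disks, instances, and so on) before and after the run and reports anything present only afterwards with a `+` prefix, as in the junit output above. A minimal sketch of that comparison, using hypothetical file names and route names:

```shell
# Hypothetical before/after snapshots; the real check captures listings such as
# 'gcloud compute routes list' around the test run.
printf 'default-route-aaaa\n' > before.txt
printf 'default-route-aaaa\ndefault-route-8bd8ac10ff88e99d\n' > after.txt

# Lines only in the "after" snapshot are leaked resources; re-prefix them
# with '+' to match the report format.
diff before.txt after.txt | grep '^>' | sed 's/^> /+/'
# prints: +default-route-8bd8ac10ff88e99d
```

Here the leaked route would then need to be deleted manually (or by a janitor job), since the cluster teardown did not remove it.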



177 passed tests (collapsed)

4252 skipped tests (collapsed)

Error lines from build-log.txt

... skipping 15 lines ...
I0111 15:39:12.377] process 48 exited with code 0 after 0.0m
I0111 15:39:12.377] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 15:39:12.377] Root: /workspace
I0111 15:39:12.377] cd to /workspace
I0111 15:39:12.377] Configure environment...
I0111 15:39:12.378] Call:  git show -s --format=format:%ct HEAD
W0111 15:39:12.382] fatal: not a git repository (or any of the parent directories): .git
I0111 15:39:12.382] process 61 exited with code 128 after 0.0m
W0111 15:39:12.382] Unable to print commit date for HEAD
I0111 15:39:12.382] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 15:39:12.906] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 15:39:13.222] process 62 exited with code 0 after 0.0m
I0111 15:39:13.222] Call:  gcloud config get-value account
... skipping 313 lines ...
W0111 15:42:28.125] Trying to find master named 'test-9b5ed62f1f-master'
W0111 15:42:28.125] Looking for address 'test-9b5ed62f1f-master-ip'
W0111 15:42:28.973] Using master: test-9b5ed62f1f-master (external IP: 34.82.149.6)
I0111 15:42:29.074] Waiting up to 300 seconds for cluster initialization.
I0111 15:42:29.074] 
I0111 15:42:29.074]   This will continually check to see if the API for kubernetes is reachable.
I0111 15:42:29.074]   This may time out if there was some uncaught error during start up.
I0111 15:42:29.074] 
I0111 15:43:39.974] ...............Kubernetes cluster created.
I0111 15:43:40.142] Cluster "gce-cvm-upg-1-3-lat-ctl-skew_test-9b5ed62f1f" set.
I0111 15:43:40.311] User "gce-cvm-upg-1-3-lat-ctl-skew_test-9b5ed62f1f" set.
I0111 15:43:40.500] Context "gce-cvm-upg-1-3-lat-ctl-skew_test-9b5ed62f1f" created.
I0111 15:43:40.679] Switched to context "gce-cvm-upg-1-3-lat-ctl-skew_test-9b5ed62f1f".
... skipping 19 lines ...
I0111 15:44:20.609] NAME                                STATUS                     ROLES    AGE   VERSION
I0111 15:44:20.609] test-9b5ed62f1f-master              Ready,SchedulingDisabled   <none>   16s   v1.15.8-beta.1.30+14ede42c4fe699
I0111 15:44:20.609] test-9b5ed62f1f-minion-group-7kkt   Ready                      <none>   17s   v1.15.8-beta.1.30+14ede42c4fe699
I0111 15:44:20.610] test-9b5ed62f1f-minion-group-grlk   Ready                      <none>   18s   v1.15.8-beta.1.30+14ede42c4fe699
I0111 15:44:20.610] test-9b5ed62f1f-minion-group-zz58   Ready                      <none>   16s   v1.15.8-beta.1.30+14ede42c4fe699
I0111 15:44:20.954] Validate output:
I0111 15:44:21.277] NAME                 STATUS    MESSAGE             ERROR
I0111 15:44:21.278] scheduler            Healthy   ok                  
I0111 15:44:21.278] etcd-1               Healthy   {"health":"true"}   
I0111 15:44:21.278] etcd-0               Healthy   {"health":"true"}   
I0111 15:44:21.278] controller-manager   Healthy   ok                  
I0111 15:44:21.286] Cluster validation succeeded
W0111 15:44:21.387] Done, listing cluster services:
... skipping 102 lines ...
I0111 15:44:45.331] 
I0111 15:44:49.436] Jan 11 15:44:49.436: INFO: cluster-master-image: cos-73-11647-163-0
I0111 15:44:49.436] Jan 11 15:44:49.436: INFO: cluster-node-image: cos-73-11647-163-0
I0111 15:44:49.437] Jan 11 15:44:49.436: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 15:44:49.439] Jan 11 15:44:49.439: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
I0111 15:44:49.619] Jan 11 15:44:49.619: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
I0111 15:44:49.774] Jan 11 15:44:49.774: INFO: The status of Pod etcd-server-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:49.774] Jan 11 15:44:49.774: INFO: 20 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0111 15:44:49.775] Jan 11 15:44:49.774: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:49.775] Jan 11 15:44:49.774: INFO: POD                                 NODE                    PHASE    GRACE  CONDITIONS
I0111 15:44:49.775] Jan 11 15:44:49.774: INFO: etcd-server-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:44:49.775] Jan 11 15:44:49.774: INFO: 
I0111 15:44:51.890] Jan 11 15:44:51.889: INFO: The status of Pod etcd-server-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:51.890] Jan 11 15:44:51.889: INFO: The status of Pod kube-apiserver-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:51.890] Jan 11 15:44:51.889: INFO: The status of Pod kube-scheduler-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:51.890] Jan 11 15:44:51.889: INFO: 20 / 23 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
I0111 15:44:51.890] Jan 11 15:44:51.889: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:51.891] Jan 11 15:44:51.889: INFO: POD                                    NODE                    PHASE    GRACE  CONDITIONS
I0111 15:44:51.891] Jan 11 15:44:51.889: INFO: etcd-server-test-9b5ed62f1f-master     test-9b5ed62f1f-master  Pending         []
I0111 15:44:51.891] Jan 11 15:44:51.889: INFO: kube-apiserver-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:44:51.891] Jan 11 15:44:51.889: INFO: kube-scheduler-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:44:51.891] Jan 11 15:44:51.889: INFO: 
I0111 15:44:53.885] Jan 11 15:44:53.885: INFO: The status of Pod etcd-server-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:53.886] Jan 11 15:44:53.885: INFO: The status of Pod kube-apiserver-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:53.886] Jan 11 15:44:53.885: INFO: The status of Pod kube-controller-manager-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:53.886] Jan 11 15:44:53.885: INFO: The status of Pod kube-scheduler-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:53.886] Jan 11 15:44:53.885: INFO: The status of Pod l7-lb-controller-v1.2.3-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:53.887] Jan 11 15:44:53.885: INFO: 20 / 25 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
I0111 15:44:53.887] Jan 11 15:44:53.885: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:53.887] Jan 11 15:44:53.885: INFO: POD                                             NODE                    PHASE    GRACE  CONDITIONS
I0111 15:44:53.887] Jan 11 15:44:53.885: INFO: etcd-server-test-9b5ed62f1f-master              test-9b5ed62f1f-master  Pending         []
I0111 15:44:53.888] Jan 11 15:44:53.885: INFO: kube-apiserver-test-9b5ed62f1f-master           test-9b5ed62f1f-master  Pending         []
I0111 15:44:53.888] Jan 11 15:44:53.885: INFO: kube-controller-manager-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:44:53.888] Jan 11 15:44:53.885: INFO: kube-scheduler-test-9b5ed62f1f-master           test-9b5ed62f1f-master  Pending         []
I0111 15:44:53.888] Jan 11 15:44:53.885: INFO: l7-lb-controller-v1.2.3-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:44:53.888] Jan 11 15:44:53.885: INFO: 
I0111 15:44:55.886] Jan 11 15:44:55.885: INFO: The status of Pod etcd-empty-dir-cleanup-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.886] Jan 11 15:44:55.885: INFO: The status of Pod etcd-server-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.886] Jan 11 15:44:55.885: INFO: The status of Pod fluentd-gcp-v3.2.0-nz6vd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.887] Jan 11 15:44:55.885: INFO: The status of Pod kube-apiserver-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.887] Jan 11 15:44:55.885: INFO: The status of Pod kube-controller-manager-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.887] Jan 11 15:44:55.886: INFO: The status of Pod kube-scheduler-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: The status of Pod l7-lb-controller-v1.2.3-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: 19 / 26 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: POD                                             NODE                               PHASE    GRACE  CONDITIONS
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: etcd-empty-dir-cleanup-test-9b5ed62f1f-master   test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.888] Jan 11 15:44:55.886: INFO: etcd-server-test-9b5ed62f1f-master              test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.889] Jan 11 15:44:55.886: INFO: fluentd-gcp-v3.2.0-nz6vd                        test-9b5ed62f1f-minion-group-7kkt  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  }]
I0111 15:44:55.889] Jan 11 15:44:55.886: INFO: kube-apiserver-test-9b5ed62f1f-master           test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.889] Jan 11 15:44:55.886: INFO: kube-controller-manager-test-9b5ed62f1f-master  test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.890] Jan 11 15:44:55.886: INFO: kube-scheduler-test-9b5ed62f1f-master           test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.890] Jan 11 15:44:55.886: INFO: l7-lb-controller-v1.2.3-test-9b5ed62f1f-master  test-9b5ed62f1f-master             Pending         []
I0111 15:44:55.890] Jan 11 15:44:55.886: INFO: 
I0111 15:44:57.938] Jan 11 15:44:57.938: INFO: The status of Pod etcd-empty-dir-cleanup-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:57.939] Jan 11 15:44:57.938: INFO: The status of Pod fluentd-gcp-v3.2.0-nz6vd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:57.939] Jan 11 15:44:57.938: INFO: The status of Pod kube-apiserver-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:57.939] Jan 11 15:44:57.938: INFO: 23 / 26 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
I0111 15:44:57.939] Jan 11 15:44:57.938: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:57.940] Jan 11 15:44:57.938: INFO: POD                                            NODE                               PHASE    GRACE  CONDITIONS
I0111 15:44:57.940] Jan 11 15:44:57.938: INFO: etcd-empty-dir-cleanup-test-9b5ed62f1f-master  test-9b5ed62f1f-master             Pending         []
I0111 15:44:57.940] Jan 11 15:44:57.938: INFO: fluentd-gcp-v3.2.0-nz6vd                       test-9b5ed62f1f-minion-group-7kkt  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  }]
I0111 15:44:57.941] Jan 11 15:44:57.938: INFO: kube-apiserver-test-9b5ed62f1f-master          test-9b5ed62f1f-master             Pending         []
I0111 15:44:57.941] Jan 11 15:44:57.938: INFO: 
I0111 15:44:59.890] Jan 11 15:44:59.889: INFO: The status of Pod fluentd-gcp-v3.2.0-nz6vd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:44:59.890] Jan 11 15:44:59.889: INFO: 25 / 26 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
I0111 15:44:59.890] Jan 11 15:44:59.889: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:44:59.890] Jan 11 15:44:59.889: INFO: POD                       NODE                               PHASE    GRACE  CONDITIONS
I0111 15:44:59.891] Jan 11 15:44:59.889: INFO: fluentd-gcp-v3.2.0-nz6vd  test-9b5ed62f1f-minion-group-7kkt  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  }]
I0111 15:44:59.891] Jan 11 15:44:59.889: INFO: 
I0111 15:45:01.885] Jan 11 15:45:01.885: INFO: The status of Pod etcd-server-events-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:45:01.886] Jan 11 15:45:01.885: INFO: The status of Pod fluentd-gcp-v3.2.0-nz6vd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:45:01.886] Jan 11 15:45:01.885: INFO: 25 / 27 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
I0111 15:45:01.886] Jan 11 15:45:01.885: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:45:01.886] Jan 11 15:45:01.885: INFO: POD                                        NODE                               PHASE    GRACE  CONDITIONS
I0111 15:45:01.886] Jan 11 15:45:01.885: INFO: etcd-server-events-test-9b5ed62f1f-master  test-9b5ed62f1f-master             Pending         []
I0111 15:45:01.887] Jan 11 15:45:01.885: INFO: fluentd-gcp-v3.2.0-nz6vd                   test-9b5ed62f1f-minion-group-7kkt  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:55 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  }]
I0111 15:45:01.887] Jan 11 15:45:01.885: INFO: 
I0111 15:45:03.894] Jan 11 15:45:03.894: INFO: The status of Pod etcd-server-events-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:45:03.894] Jan 11 15:45:03.894: INFO: The status of Pod fluentd-gcp-v3.2.0-jncxw is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:45:03.895] Jan 11 15:45:03.894: INFO: 25 / 27 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
I0111 15:45:03.895] Jan 11 15:45:03.894: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:45:03.895] Jan 11 15:45:03.894: INFO: POD                                        NODE                               PHASE    GRACE  CONDITIONS
I0111 15:45:03.895] Jan 11 15:45:03.894: INFO: etcd-server-events-test-9b5ed62f1f-master  test-9b5ed62f1f-master             Pending         []
I0111 15:45:03.896] Jan 11 15:45:03.894: INFO: fluentd-gcp-v3.2.0-jncxw                   test-9b5ed62f1f-minion-group-7kkt  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC  }]
I0111 15:45:03.896] Jan 11 15:45:03.894: INFO: 
I0111 15:45:05.903] Jan 11 15:45:05.900: INFO: The status of Pod etcd-server-events-test-9b5ed62f1f-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 15:45:05.904] Jan 11 15:45:05.900: INFO: 26 / 27 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
I0111 15:45:05.904] Jan 11 15:45:05.900: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 15:45:05.904] Jan 11 15:45:05.900: INFO: POD                                        NODE                    PHASE    GRACE  CONDITIONS
I0111 15:45:05.904] Jan 11 15:45:05.900: INFO: etcd-server-events-test-9b5ed62f1f-master  test-9b5ed62f1f-master  Pending         []
I0111 15:45:05.904] Jan 11 15:45:05.900: INFO: 
I0111 15:45:07.904] Jan 11 15:45:07.901: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
... skipping 364 lines ...
I0111 15:47:47.625] Jan 11 15:47:47.625: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.149.6 --kubeconfig=/workspace/.kube/config exec --namespace=services-1269 execpod-252jb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.0.243.47:80 2>&1 || true; echo; done'
I0111 15:47:48.558] Jan 11 15:47:48.551: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n+ wget -q -T 1 -O - http://10.0.243.47:80\n+ echo\n"
I0111 15:47:48.560] Jan 11 15:47:48.551: INFO: stdout: "service1-j4tk8\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\nservice1-qxgcm\nservice1-j4tk8\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-j4tk8\nservice1-j4tk8\nservice1-j4tk8\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-j4tk8\nservice1-qxgcm\nservice1-j4tk8\nservice1-j4tk8\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-8qh97\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-qxgcm\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-j4tk8\nservice1-8qh97\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1
-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-j4tk8\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-8qh97\nservice1-j4tk8\nservice1-j4tk8\nservice1-8qh97\nservice1-8qh97\nservice1-qxgcm\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-qxgcm\nservice1-j4tk8\n"
I0111 15:47:48.560] STEP: Deleting pod execpod-252jb in namespace services-1269
I0111 15:47:48.596] STEP: Restarting apiserver
I0111 15:47:48.633] Jan 11 15:47:48.633: INFO: Restarting master via ssh, running: pidof kube-apiserver | xargs sudo kill
I0111 15:47:49.138] Jan 11 15:47:49.138: INFO: Failed to get apiserver's restart count: Get https://34.82.149.6/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver: dial tcp 34.82.149.6:443: connect: connection refused
I0111 15:47:57.416] Jan 11 15:47:57.416: INFO: Waiting for apiserver restart count to increase
I0111 15:48:02.453] Jan 11 15:48:02.452: INFO: Apiserver has restarted.
I0111 15:48:02.453] STEP: Waiting for apiserver to come up by polling /healthz
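The /healthz poll in the step above is, in essence, a bounded retry loop. A sketch under that assumption, with `probe` standing in for the actual request (e.g. `curl -sk https://<master-ip>/healthz`):

```shell
# Retry a probe up to 30 times, 1s apart; stop as soon as it succeeds.
# 'probe' is a hypothetical stand-in for the real /healthz request.
probe() { true; }

healthy=false
for i in $(seq 1 30); do
  if probe; then
    healthy=true
    break
  fi
  sleep 1
done
echo "healthy=$healthy"
```

The real test additionally distinguishes a restarted apiserver from one that never went down by watching the container restart count, as the surrounding log lines show.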
I0111 15:48:02.486] Jan 11 15:48:02.486: INFO: Creating new exec pod
I0111 15:48:06.597] STEP: verifying service has 3 reachable backends
I0111 15:48:06.597] Jan 11 15:48:06.597: INFO: Executing cmd "set -e; for i in $(seq 1 150); do wget -q --timeout=0.2 --tries=1 -O - http://10.0.243.47:80 2>&1 || true; echo; done" on host 34.82.30.218:22
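The reachability check above sends 150 requests through the service VIP; each response body is the name of the serving pod, so counting distinct names in stdout tells the test how many backends are reachable. A sketch of that counting step, with a hypothetical four-line sample in place of the real 150-line output:

```shell
# Each line of stdout is the pod that answered one request;
# distinct pod names = number of reachable backends.
printf 'service1-j4tk8\nservice1-8qh97\nservice1-j4tk8\nservice1-qxgcm\n' |
  sort -u | wc -l
# counts 3 distinct backends
```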
... skipping 142 lines ...
I0111 15:49:26.592] 
I0111 15:49:26.592]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 15:49:26.593] ------------------------------
I0111 15:49:26.593] SSSSSSSSSSSSS
I0111 15:49:26.593] ------------------------------
I0111 15:49:26.593] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode 
I0111 15:49:26.593]   should fail to create pod by failing to mount volume [Slow]
I0111 15:49:26.594]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:168
I0111 15:49:26.594] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0111 15:49:26.594]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 15:49:26.594] [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0111 15:49:26.594]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 15:49:26.595] STEP: Creating a kubernetes client
I0111 15:49:26.595] Jan 11 15:49:26.589: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 15:49:26.596] STEP: Building a namespace api object, basename volumemode
I0111 15:49:26.707] STEP: Waiting for a default service account to be provisioned in namespace
I0111 15:49:26.748] [It] should fail to create pod by failing to mount volume [Slow]
I0111 15:49:26.748]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:168
I0111 15:49:28.895] Jan 11 15:49:28.895: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.149.6 --kubeconfig=/workspace/.kube/config exec --namespace=volumemode-9734 hostexec-test-9b5ed62f1f-minion-group-zz58 -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'
I0111 15:49:29.528] Jan 11 15:49:29.527: INFO: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
I0111 15:49:29.529] Jan 11 15:49:29.528: INFO: stdout: "0\n"
I0111 15:49:29.529] Jan 11 15:49:29.528: INFO: Requires at least 1 scsi fs localSSD 
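The skip above comes from a gate the driver runs first: it lists the SCSI filesystem local SSDs under the by-uuid mount directory and bails out when the count is zero (as here, where `ls` reports the directory does not exist). A sketch of that gate, assuming a hypothetical `ssd_count` helper name:

```shell
# Hypothetical helper mirroring the check in the log: count entries under the
# local-SSD by-uuid directory; 0 means no SCSI fs local SSD is available and
# the test should be skipped.
ssd_count() {
  local dir=${1:-/mnt/disks/by-uuid/google-local-ssds-scsi-fs}
  # 2>/dev/null: a missing directory yields no output, so wc -l prints 0,
  # matching the "stdout: 0" the suite logs above.
  ls -1 "$dir" 2>/dev/null | wc -l
}
```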
I0111 15:49:29.529] [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 7 lines ...
I0111 15:49:37.014] [sig-storage] In-tree Volumes
I0111 15:49:37.014] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 15:49:37.014]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 15:49:37.014]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 15:49:37.014]     [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
I0111 15:49:37.015]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 15:49:37.015]       should fail to create pod by failing to mount volume [Slow] [It]
I0111 15:49:37.015]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:168
I0111 15:49:37.016] 
I0111 15:49:37.016]       Requires at least 1 scsi fs localSSD 
I0111 15:49:37.016] 
I0111 15:49:37.016]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1720
I0111 15:49:37.016] ------------------------------
... skipping 110 lines ...
I0111 15:50:00.860] Jan 11 15:50:00.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 15:50:00.894] STEP: Destroying namespace "pv-9521" for this suite.
I0111 15:50:07.000] Jan 11 15:50:06.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 15:50:08.311] Jan 11 15:50:08.310: INFO: namespace pv-9521 deletion completed in 7.416233036s
I0111 15:50:08.311] [AfterEach] [sig-storage] [Serial] Volume metrics
I0111 15:50:08.311]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:76
I0111 15:50:08.311] Jan 11 15:50:08.310: INFO: Failed to get pvc pv-9521/: resource name may not be empty
I0111 15:50:08.312] •SSSSSSSSSSSSS
I0111 15:50:08.312] ------------------------------
I0111 15:50:08.312] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath 
I0111 15:50:08.312]   should support existing single file [LinuxOnly]
I0111 15:50:08.313]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:195
I0111 15:50:08.313] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
... skipping 62 lines ...
I0111 15:50:08.572] Jan 11 15:50:08.572: INFO: Waiting up to 20m0s for service "lb-hc-int" to have a LoadBalancer
I0111 15:50:52.768] STEP: modify the health check interval
I0111 15:51:13.283] STEP: restart kube-controller-manager
I0111 15:51:13.284] Jan 11 15:51:13.281: INFO: Restarting controller-manager via ssh, running: pidof kube-controller-manager | xargs sudo kill
I0111 15:51:14.271] Jan 11 15:51:14.267: INFO: ssh prow@34.82.149.6:22: command:   curl http://localhost:10252/healthz
I0111 15:51:14.272] Jan 11 15:51:14.267: INFO: ssh prow@34.82.149.6:22: stdout:    ""
I0111 15:51:14.272] Jan 11 15:51:14.267: INFO: ssh prow@34.82.149.6:22: stderr:    "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to localhost port 10252: Connection refused\n"
I0111 15:51:14.272] Jan 11 15:51:14.267: INFO: ssh prow@34.82.149.6:22: exit code: 7
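The sequence above is the suite's restart-and-wait pattern: kill the process over ssh, then poll its healthz endpoint, treating "connection refused" (curl exit code 7) as "not back yet" rather than a failure. A hedged sketch of that polling loop (`wait_healthy` is a hypothetical helper, not part of the e2e framework; the real suite uses its own Go retry logic):

```shell
# Hypothetical helper: retry an arbitrary probe command until it succeeds or
# the attempt budget is exhausted. Probe failures (e.g. curl exit 7 while the
# restarted process is still coming up) just trigger another attempt.
wait_healthy() {
  local tries=$1; shift
  for _ in $(seq 1 "$tries"); do
    "$@" >/dev/null 2>&1 && return 0  # probe succeeded: endpoint is back
    sleep 1                           # brief pause before retrying
  done
  return 1                            # never came back within the budget
}

# e.g. wait_healthy 30 curl -sf http://localhost:10252/healthz
```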
I0111 15:51:19.773] STEP: health check should be reconciled
I0111 15:51:19.904] Jan 11 15:51:19.904: INFO: hc.CheckIntervalSec = 7
I0111 15:51:40.049] Jan 11 15:51:40.048: INFO: hc.CheckIntervalSec = 7
I0111 15:52:00.041] Jan 11 15:52:00.041: INFO: hc.CheckIntervalSec = 8
I0111 15:52:00.170] [AfterEach] [sig-network] Services
... skipping 626 lines ...
I0111 16:02:14.858] STEP: Destroying namespace "nsdeletetest-9027" for this suite.
I0111 16:02:20.964] Jan 11 16:02:20.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 16:02:22.281] Jan 11 16:02:22.281: INFO: namespace nsdeletetest-9027 deletion completed in 7.422940838s
I0111 16:02:22.281] •SSSSSSSSSSSSS
I0111 16:02:22.281] ------------------------------
I0111 16:02:22.282] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath 
I0111 16:02:22.282]   should fail if subpath directory is outside the volume [Slow]
I0111 16:02:22.282]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 16:02:22.282] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:02:22.282]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:02:22.282] Jan 11 16:02:22.281: INFO: Driver local doesn't support DynamicPV -- skipping
I0111 16:02:22.283] [AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:02:22.283]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 16:02:22.284] [sig-storage] In-tree Volumes
I0111 16:02:22.284] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:02:22.284]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 16:02:22.284]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 16:02:22.284]     [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:02:22.285]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:02:22.285]       should fail if subpath directory is outside the volume [Slow] [BeforeEach]
I0111 16:02:22.285]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 16:02:22.285] 
I0111 16:02:22.285]       Driver local doesn't support DynamicPV -- skipping
I0111 16:02:22.285] 
I0111 16:02:22.286]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:02:22.286] ------------------------------
... skipping 161 lines ...
I0111 16:04:50.841] STEP: Destroying namespace "etcd-failure-3077" for this suite.
I0111 16:05:04.946] Jan 11 16:05:04.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 16:05:06.260] Jan 11 16:05:06.260: INFO: namespace etcd-failure-3077 deletion completed in 15.419416756s
I0111 16:05:06.261] •S
I0111 16:05:06.261] ------------------------------
I0111 16:05:06.261] [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] 
I0111 16:05:06.261]   should fail to schedule a pod with a zone missing from AllowedTopologies; PD is provisioned with delayed volume binding
I0111 16:05:06.261]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:103
I0111 16:05:06.262] [BeforeEach] CSI Topology test using GCE PD driver [Serial]
I0111 16:05:06.262]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 16:05:06.262] STEP: Creating a kubernetes client
I0111 16:05:06.262] Jan 11 16:05:06.260: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 16:05:06.262] STEP: Building a namespace api object, basename csitopology
... skipping 27 lines ...
I0111 16:05:07.054] Jan 11 16:05:07.054: INFO: creating *v1.RoleBinding: csitopology-4127/csi-controller-attacher-role-cfg
I0111 16:05:07.090] Jan 11 16:05:07.090: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csitopology-4127
I0111 16:05:07.126] Jan 11 16:05:07.125: INFO: creating *v1.RoleBinding: csitopology-4127/csi-controller-provisioner-role-cfg
I0111 16:05:07.162] Jan 11 16:05:07.162: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csitopology-4127
I0111 16:05:07.197] Jan 11 16:05:07.197: INFO: creating *v1.DaemonSet: csitopology-4127/csi-gce-pd-node
I0111 16:05:07.237] Jan 11 16:05:07.237: INFO: creating *v1.StatefulSet: csitopology-4127/csi-gce-pd-controller
I0111 16:05:07.304] [It] should fail to schedule a pod with a zone missing from AllowedTopologies; PD is provisioned with delayed volume binding
I0111 16:05:07.304]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:103
I0111 16:05:07.386] Jan 11 16:05:07.386: INFO: Requires more than one zone
I0111 16:05:07.386] [AfterEach] CSI Topology test using GCE PD driver [Serial]
I0111 16:05:07.387]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0111 16:05:07.387] Jan 11 16:05:07.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 16:05:07.428] STEP: Destroying namespace "csitopology-4127" for this suite.
... skipping 26 lines ...
I0111 16:05:31.675] 
I0111 16:05:31.675] S [SKIPPING] [25.414 seconds]
I0111 16:05:31.675] [sig-storage] CSI Volumes
I0111 16:05:31.676] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:05:31.676]   CSI Topology test using GCE PD driver [Serial]
I0111 16:05:31.676]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:64
I0111 16:05:31.676]     should fail to schedule a pod with a zone missing from AllowedTopologies; PD is provisioned with delayed volume binding [It]
I0111 16:05:31.677]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:103
I0111 16:05:31.677] 
I0111 16:05:31.677]     Requires more than one zone
I0111 16:05:31.677] 
I0111 16:05:31.677]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:104
I0111 16:05:31.677] ------------------------------
... skipping 898 lines ...
I0111 16:16:12.580] 
I0111 16:16:12.580]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1720
I0111 16:16:12.580] ------------------------------
I0111 16:16:12.580] SSSSSS
I0111 16:16:12.580] ------------------------------
I0111 16:16:12.580] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 16:16:12.581]   should fail if subpath with backstepping is outside the volume [Slow]
I0111 16:16:12.581]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 16:16:12.581] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:12.581]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:16:12.581] Jan 11 16:16:12.577: INFO: Driver pd.csi.storage.gke.io doesn't support PreprovisionedPV -- skipping
I0111 16:16:12.581] [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:12.582]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 16:16:12.582] [sig-storage] CSI Volumes
I0111 16:16:12.582] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:16:12.582]   [Driver: pd.csi.storage.gke.io][Serial]
I0111 16:16:12.583]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:58
I0111 16:16:12.583]     [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:12.583]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:16:12.583]       should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
I0111 16:16:12.583]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 16:16:12.584] 
I0111 16:16:12.584]       Driver pd.csi.storage.gke.io doesn't support PreprovisionedPV -- skipping
I0111 16:16:12.584] 
I0111 16:16:12.584]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:16:12.584] ------------------------------
I0111 16:16:12.584] SSSSSSSS
I0111 16:16:12.584] ------------------------------
I0111 16:16:12.585] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 16:16:12.585]   should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 16:16:12.585]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:16:12.585] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:12.585]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:16:12.585] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:12.586]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 16:16:12.586] STEP: Creating a kubernetes client
I0111 16:16:12.586] Jan 11 16:16:12.578: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 16:16:12.586] STEP: Building a namespace api object, basename provisioning
I0111 16:16:12.686] STEP: Waiting for a default service account to be provisioned in namespace
I0111 16:16:12.720] [It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 16:16:12.721]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:16:14.867] Jan 11 16:16:14.867: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.149.6 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-6948 hostexec-test-9b5ed62f1f-minion-group-grlk -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'
I0111 16:16:15.535] Jan 11 16:16:15.535: INFO: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
I0111 16:16:15.535] Jan 11 16:16:15.535: INFO: stdout: "0\n"
I0111 16:16:15.536] Jan 11 16:16:15.535: INFO: Requires at least 1 scsi fs localSSD 
I0111 16:16:15.536] [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 7 lines ...
I0111 16:16:23.009] [sig-storage] In-tree Volumes
I0111 16:16:23.009] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:16:23.010]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 16:16:23.010]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 16:16:23.010]     [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 16:16:23.010]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:16:23.010]       should fail if subpath file is outside the volume [Slow][LinuxOnly] [It]
I0111 16:16:23.011]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:16:23.011] 
I0111 16:16:23.011]       Requires at least 1 scsi fs localSSD 
I0111 16:16:23.011] 
I0111 16:16:23.011]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1720
I0111 16:16:23.012] ------------------------------
... skipping 167 lines ...
I0111 16:16:33.431] 
I0111 16:16:33.431]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:16:33.432] ------------------------------
I0111 16:16:33.432] SSSS
I0111 16:16:33.432] ------------------------------
I0111 16:16:33.432] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath 
I0111 16:16:33.432]   should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 16:16:33.432]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:16:33.433] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:16:33.433]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:16:33.433] Jan 11 16:16:33.421: INFO: Driver local doesn't support DynamicPV -- skipping
I0111 16:16:33.433] [AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:16:33.433]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 16:16:33.433] [sig-storage] In-tree Volumes
I0111 16:16:33.434] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:16:33.434]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 16:16:33.434]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 16:16:33.434]     [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:16:33.434]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:16:33.434]       should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
I0111 16:16:33.434]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:16:33.434] 
I0111 16:16:33.434]       Driver local doesn't support DynamicPV -- skipping
I0111 16:16:33.435] 
I0111 16:16:33.435]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:16:33.435] ------------------------------
... skipping 266 lines ...
I0111 16:19:42.744] STEP: Destroying namespace "provisioning-6885" for this suite.
I0111 16:19:48.850] Jan 11 16:19:48.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 16:19:50.165] Jan 11 16:19:50.165: INFO: namespace provisioning-6885 deletion completed in 7.42022314s
I0111 16:19:50.165] •SSSSSSSSSSSSSSS
I0111 16:19:50.165] ------------------------------
I0111 16:19:50.166] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath 
I0111 16:19:50.166]   should fail if subpath with backstepping is outside the volume [Slow]
I0111 16:19:50.166]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 16:19:50.166] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 16:19:50.166]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:19:50.166] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 16:19:50.167]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 16:19:50.167] STEP: Creating a kubernetes client
I0111 16:19:50.167] Jan 11 16:19:50.165: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 16:19:50.167] STEP: Building a namespace api object, basename provisioning
I0111 16:19:50.308] STEP: Waiting for a default service account to be provisioned in namespace
I0111 16:19:50.342] [It] should fail if subpath with backstepping is outside the volume [Slow]
I0111 16:19:50.342]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 16:19:50.342] STEP: deploying csi gce-pd driver
I0111 16:19:50.377] Jan 11 16:19:50.376: INFO: Found CI service account key at /etc/service-account/service-account.json
I0111 16:19:50.377] Jan 11 16:19:50.376: INFO: Running cp [/etc/service-account/service-account.json /tmp/dbd6129b-e31d-4880-ae43-5f73bcbd71c3/cloud-sa.json]
I0111 16:19:50.414] Jan 11 16:19:50.414: INFO: Shredding file /tmp/dbd6129b-e31d-4880-ae43-5f73bcbd71c3/cloud-sa.json
I0111 16:19:50.415] Jan 11 16:19:50.414: INFO: Running shred [--remove /tmp/dbd6129b-e31d-4880-ae43-5f73bcbd71c3/cloud-sa.json]
... skipping 24 lines ...
I0111 16:19:51.179] Jan 11 16:19:51.179: INFO: creating *v1.StatefulSet: provisioning-8860/csi-gce-pd-controller
I0111 16:19:51.239] Jan 11 16:19:51.239: INFO: Test running for native CSI Driver, not checking metrics
I0111 16:19:51.239] Jan 11 16:19:51.239: INFO: Creating resource for dynamic PV
I0111 16:19:51.240] STEP: creating a StorageClass provisioning-8860-pd.csi.storage.gke.io-sccf88q
I0111 16:19:51.423] STEP: creating a claim
I0111 16:19:51.597] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-bf4f
I0111 16:19:51.640] STEP: Checking for subpath error in container status
I0111 16:20:19.712] Jan 11 16:20:19.712: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-bf4f" in namespace "provisioning-8860"
I0111 16:20:19.750] Jan 11 16:20:19.750: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-bf4f" to be fully deleted
I0111 16:20:29.827] STEP: Deleting pod
I0111 16:20:29.828] Jan 11 16:20:29.827: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-bf4f" in namespace "provisioning-8860"
I0111 16:20:29.862] STEP: Deleting pvc
I0111 16:20:29.862] Jan 11 16:20:29.861: INFO: Deleting PersistentVolumeClaim "pvc-f78xd"
... skipping 117 lines ...
I0111 16:22:30.654] Jan 11 16:22:30.653: INFO: ssh prow@34.82.30.218:22: stdout:    ""
I0111 16:22:30.654] Jan 11 16:22:30.653: INFO: ssh prow@34.82.30.218:22: stderr:    ""
I0111 16:22:30.654] Jan 11 16:22:30.653: INFO: ssh prow@34.82.30.218:22: exit code: 0
I0111 16:22:30.654] Jan 11 16:22:30.653: INFO: Waiting up to 1m0s for node test-9b5ed62f1f-minion-group-7kkt condition Ready to be true
I0111 16:22:30.688] STEP: Deleting pod
I0111 16:22:30.688] Jan 11 16:22:30.688: INFO: Deleting pod "pod-subpath-test-gcepd-94hx" in namespace "provisioning-9878"
I0111 16:22:32.236] Jan 11 16:22:32.236: INFO: error deleting PD "test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd": googleapi: Error 400: The disk resource 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/disks/test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd' is already being used by 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/test-9b5ed62f1f-minion-group-7kkt', resourceInUseByAnotherResource
I0111 16:22:32.237] Jan 11 16:22:32.236: INFO: Couldn't delete PD "test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/disks/test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd' is already being used by 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/test-9b5ed62f1f-minion-group-7kkt', resourceInUseByAnotherResource
I0111 16:22:38.639] Jan 11 16:22:38.638: INFO: error deleting PD "test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd": googleapi: Error 400: The disk resource 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/disks/test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd' is already being used by 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/test-9b5ed62f1f-minion-group-7kkt', resourceInUseByAnotherResource
I0111 16:22:38.639] Jan 11 16:22:38.638: INFO: Couldn't delete PD "test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd", sleeping 5s: googleapi: Error 400: The disk resource 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/disks/test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd' is already being used by 'projects/gce-cvm-upg-1-3-lat-ctl-skew/zones/us-west1-b/instances/test-9b5ed62f1f-minion-group-7kkt', resourceInUseByAnotherResource
I0111 16:22:45.804] Jan 11 16:22:45.804: INFO: Successfully deleted PD "test-9b5ed62f1f-079ed50d-d2dd-4fec-a8be-1cd4e3bc2bbd".
I0111 16:22:45.804] Jan 11 16:22:45.804: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
I0111 16:22:45.805] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 16:22:45.805]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0111 16:22:45.805] Jan 11 16:22:45.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 16:22:45.839] STEP: Destroying namespace "provisioning-9878" for this suite.
... skipping 25 lines ...
I0111 16:22:53.305] 
I0111 16:22:53.305]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:22:53.305] ------------------------------
I0111 16:22:53.305] SSSSSSSSS
I0111 16:22:53.306] ------------------------------
I0111 16:22:53.306] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode 
I0111 16:22:53.306]   should fail in binding dynamic provisioned PV to PVC
I0111 16:22:53.306]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:239
I0111 16:22:53.306] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
I0111 16:22:53.307]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:22:53.307] Jan 11 16:22:53.302: INFO: Driver local doesn't support DynamicPV -- skipping
I0111 16:22:53.307] [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
I0111 16:22:53.307]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 16:22:53.308] [sig-storage] In-tree Volumes
I0111 16:22:53.308] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:22:53.309]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 16:22:53.309]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 16:22:53.309]     [Testpattern: Dynamic PV (block volmode)] volumeMode
I0111 16:22:53.309]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:22:53.309]       should fail in binding dynamic provisioned PV to PVC [BeforeEach]
I0111 16:22:53.309]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:239
I0111 16:22:53.309] 
I0111 16:22:53.310]       Driver local doesn't support DynamicPV -- skipping
I0111 16:22:53.310] 
I0111 16:22:53.310]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 16:22:53.310] ------------------------------
... skipping 819 lines ...
I0111 16:32:24.940] Jan 11 16:32:24.940: INFO: Waiting for ready nodes 3, current ready 3, not ready nodes 1
I0111 16:32:44.976] Jan 11 16:32:44.976: INFO: Condition Ready of node test-9b5ed62f1f-minion-group-7kkt is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2020-01-11 16:31:21 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2020-01-11 16:31:26 +0000 UTC}]. Failure
I0111 16:32:44.976] Jan 11 16:32:44.976: INFO: Waiting for ready nodes 3, current ready 3, not ready nodes 1
I0111 16:33:05.013] Jan 11 16:33:05.013: INFO: Cluster has reached the desired number of ready nodes 3
I0111 16:33:05.013] STEP: waiting for system pods to successfully restart
I0111 16:33:05.013] Jan 11 16:33:05.013: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
I0111 16:33:05.123] Jan 11 16:33:05.122: INFO: The status of Pod fluentd-gcp-v3.2.0-jncxw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 16:33:05.123] Jan 11 16:33:05.122: INFO: The status of Pod kube-proxy-test-9b5ed62f1f-minion-group-7kkt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 16:33:05.124] Jan 11 16:33:05.122: INFO: The status of Pod metadata-proxy-v0.1-sg6vj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0111 16:33:05.124] Jan 11 16:33:05.122: INFO: 28 / 31 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0111 16:33:05.124] Jan 11 16:33:05.123: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
I0111 16:33:05.124] Jan 11 16:33:05.123: INFO: POD                                           NODE                               PHASE    GRACE  CONDITIONS
I0111 16:33:05.125] Jan 11 16:33:05.123: INFO: fluentd-gcp-v3.2.0-jncxw                      test-9b5ed62f1f-minion-group-7kkt  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:31:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:45:02 +0000 UTC  }]
I0111 16:33:05.125] Jan 11 16:33:05.123: INFO: kube-proxy-test-9b5ed62f1f-minion-group-7kkt  test-9b5ed62f1f-minion-group-7kkt  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:25:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:31:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:25:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:25:10 +0000 UTC  }]
I0111 16:33:05.126] Jan 11 16:33:05.123: INFO: metadata-proxy-v0.1-sg6vj                     test-9b5ed62f1f-minion-group-7kkt  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 16:31:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 15:44:03 +0000 UTC  }]
... skipping 1727 lines ...
I0111 16:54:10.350] STEP: Destroying namespace "taint-multiple-pods-1728" for this suite.
I0111 16:54:32.466] Jan 11 16:54:32.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 16:54:33.896] Jan 11 16:54:33.896: INFO: namespace taint-multiple-pods-1728 deletion completed in 23.546383258s
I0111 16:54:33.896] •SSS
I0111 16:54:33.896] ------------------------------
I0111 16:54:33.897] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath 
I0111 16:54:33.897]   should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 16:54:33.897]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:54:33.897] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 16:54:33.897]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:54:33.898] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 16:54:33.898]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 16:54:33.898] STEP: Creating a kubernetes client
I0111 16:54:33.898] Jan 11 16:54:33.896: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 16:54:33.899] STEP: Building a namespace api object, basename provisioning
I0111 16:54:34.012] STEP: Waiting for a default service account to be provisioned in namespace
I0111 16:54:34.048] [It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 16:54:34.049]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 16:54:34.049] STEP: deploying csi gce-pd driver
I0111 16:54:34.086] Jan 11 16:54:34.086: INFO: Found CI service account key at /etc/service-account/service-account.json
I0111 16:54:34.086] Jan 11 16:54:34.086: INFO: Running cp [/etc/service-account/service-account.json /tmp/3683ab5c-5799-4afd-a9f1-6a0fa99303f9/cloud-sa.json]
I0111 16:54:34.127] Jan 11 16:54:34.127: INFO: Shredding file /tmp/3683ab5c-5799-4afd-a9f1-6a0fa99303f9/cloud-sa.json
I0111 16:54:34.127] Jan 11 16:54:34.127: INFO: Running shred [--remove /tmp/3683ab5c-5799-4afd-a9f1-6a0fa99303f9/cloud-sa.json]
... skipping 24 lines ...
I0111 16:54:34.957] Jan 11 16:54:34.956: INFO: creating *v1.StatefulSet: provisioning-9867/csi-gce-pd-controller
I0111 16:54:35.029] Jan 11 16:54:35.028: INFO: Test running for native CSI Driver, not checking metrics
I0111 16:54:35.029] Jan 11 16:54:35.028: INFO: Creating resource for dynamic PV
I0111 16:54:35.029] STEP: creating a StorageClass provisioning-9867-pd.csi.storage.gke.io-scms4ht
I0111 16:54:35.107] STEP: creating a claim
I0111 16:54:35.184] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-c6bn
I0111 16:54:35.226] STEP: Checking for subpath error in container status
I0111 16:55:07.302] Jan 11 16:55:07.302: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-c6bn" in namespace "provisioning-9867"
I0111 16:55:07.345] Jan 11 16:55:07.345: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-c6bn" to be fully deleted
I0111 16:55:19.420] STEP: Deleting pod
I0111 16:55:19.421] Jan 11 16:55:19.420: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-c6bn" in namespace "provisioning-9867"
I0111 16:55:19.457] STEP: Deleting pvc
I0111 16:55:19.458] Jan 11 16:55:19.457: INFO: Deleting PersistentVolumeClaim "pvc-9fsk4"
... skipping 151 lines ...
I0111 16:57:55.348] STEP: Destroying namespace "provisioning-2576" for this suite.
I0111 16:58:01.463] Jan 11 16:58:01.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 16:58:02.903] Jan 11 16:58:02.903: INFO: namespace provisioning-2576 deletion completed in 7.555037831s
I0111 16:58:02.942] •SSSSSSSSSSSSSSSSSS
I0111 16:58:02.942] ------------------------------
I0111 16:58:02.942] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath 
I0111 16:58:02.942]   should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 16:58:02.942]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 16:58:02.942] [BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:58:02.943]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 16:58:02.943] Jan 11 16:58:02.903: INFO: Driver pd.csi.storage.gke.io doesn't support ntfs -- skipping
I0111 16:58:02.943] [AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:58:02.943]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 16:58:02.944] [sig-storage] CSI Volumes
I0111 16:58:02.944] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 16:58:02.944]   [Driver: pd.csi.storage.gke.io][Serial]
I0111 16:58:02.944]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:58
I0111 16:58:02.945]     [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
I0111 16:58:02.945]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 16:58:02.945]       should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
I0111 16:58:02.945]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 16:58:02.945] 
I0111 16:58:02.946]       Driver pd.csi.storage.gke.io doesn't support ntfs -- skipping
I0111 16:58:02.946] 
I0111 16:58:02.946]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
I0111 16:58:02.946] ------------------------------
... skipping 58 lines ...
I0111 16:58:59.071] Jan 11 16:58:59.071: INFO: GCE PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2" appears to have successfully detached from "test-9b5ed62f1f-minion-group-mp04".
I0111 16:58:59.071] STEP: defer: cleaning up PD-RW test env
I0111 16:58:59.072] Jan 11 16:58:59.071: INFO: defer cleanup errors can usually be ignored
I0111 16:58:59.072] STEP: defer: delete host0Pod
I0111 16:58:59.121] STEP: defer: detach and delete PDs
I0111 16:58:59.121] Jan 11 16:58:59.121: INFO: Detaching GCE PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2" from node "test-9b5ed62f1f-minion-group-mp04".
I0111 16:59:00.993] Jan 11 16:59:00.992: INFO: Error detaching PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2": googleapi: Error 400: INVALID_USAGE - No attached disk found with device name 'test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2'
I0111 16:59:00.993] STEP: Waiting for PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2" to detach from "test-9b5ed62f1f-minion-group-mp04"
I0111 16:59:00.993] Jan 11 16:59:00.992: INFO: Waiting for GCE PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2" to detach from node "test-9b5ed62f1f-minion-group-mp04".
I0111 16:59:01.193] Jan 11 16:59:01.192: INFO: GCE PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2" appears to have successfully detached from "test-9b5ed62f1f-minion-group-mp04".
I0111 16:59:01.193] STEP: Deleting PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2"
I0111 16:59:03.420] Jan 11 16:59:03.419: INFO: Successfully deleted PD "test-9b5ed62f1f-2bb2acff-0898-4550-9477-7f4f35fd60d2".
I0111 16:59:03.420] [AfterEach] [sig-storage] Pod Disks
... skipping 203 lines ...
I0111 17:01:00.702] STEP: Trying to apply a random label on the found node.
I0111 17:01:00.784] STEP: verifying the node has the label failure-domain.beta.kubernetes.io/zone equivalence-e2e-test
I0111 17:01:00.821] STEP: Trying to schedule RC with Pod Affinity, which should succeed.
I0111 17:01:06.003] STEP: Remove node failure domain label
I0111 17:01:06.004] STEP: removing the label failure-domain.beta.kubernetes.io/zone off the node test-9b5ed62f1f-minion-group-n9s1
I0111 17:01:06.086] STEP: verifying the node doesn't have the label failure-domain.beta.kubernetes.io/zone
I0111 17:01:06.124] STEP: Trying to schedule another equivalent Pod, which should fail because the node label has been removed.
I0111 17:01:06.162] STEP: Considering event: 
I0111 17:01:06.163] Type = [Normal], Name = [with-label-05cbfa12-8939-4add-87ac-99953c7b569f.15e8e3ba4510f27d], Reason = [Scheduled], Message = [Successfully assigned equivalence-cache-4170/with-label-05cbfa12-8939-4add-87ac-99953c7b569f to test-9b5ed62f1f-minion-group-n9s1]
I0111 17:01:06.163] STEP: Considering event: 
I0111 17:01:06.163] Type = [Normal], Name = [with-label-05cbfa12-8939-4add-87ac-99953c7b569f.15e8e3ba75fc1860], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
I0111 17:01:06.163] STEP: Considering event: 
I0111 17:01:06.164] Type = [Normal], Name = [with-label-05cbfa12-8939-4add-87ac-99953c7b569f.15e8e3ba79deb812], Reason = [Created], Message = [Created container with-label-05cbfa12-8939-4add-87ac-99953c7b569f]
... skipping 855 lines ...
I0111 17:05:55.302] 
I0111 17:05:55.302]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:05:55.302] ------------------------------
I0111 17:05:55.302] SS
I0111 17:05:55.303] ------------------------------
I0111 17:05:55.303] [sig-apps] Daemon set [Serial] 
I0111 17:05:55.303]   should retry creating failed daemon pods [Conformance]
I0111 17:05:55.303]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
I0111 17:05:55.303] [BeforeEach] [sig-apps] Daemon set [Serial]
I0111 17:05:55.303]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 17:05:55.304] STEP: Creating a kubernetes client
I0111 17:05:55.304] Jan 11 17:05:55.289: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 17:05:55.304] STEP: Building a namespace api object, basename daemonsets
I0111 17:05:55.406] STEP: Waiting for a default service account to be provisioned in namespace
I0111 17:05:55.444] [BeforeEach] [sig-apps] Daemon set [Serial]
I0111 17:05:55.445]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
I0111 17:05:55.680] [It] should retry creating failed daemon pods [Conformance]
I0111 17:05:55.680]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
I0111 17:05:55.680] STEP: Creating a simple DaemonSet "daemon-set"
I0111 17:05:55.722] STEP: Check that daemon pods launch on every node of the cluster.
I0111 17:05:55.816] Jan 11 17:05:55.816: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:05:55.878] Jan 11 17:05:55.878: INFO: Number of nodes with available pods: 0
I0111 17:05:55.878] Jan 11 17:05:55.878: INFO: Node test-9b5ed62f1f-minion-group-mp04 is running more than one daemon pod
... skipping 3 lines ...
I0111 17:05:57.917] Jan 11 17:05:57.916: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:05:57.955] Jan 11 17:05:57.954: INFO: Number of nodes with available pods: 2
I0111 17:05:57.955] Jan 11 17:05:57.954: INFO: Node test-9b5ed62f1f-minion-group-n9s1 is running more than one daemon pod
I0111 17:05:58.916] Jan 11 17:05:58.916: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:05:58.954] Jan 11 17:05:58.954: INFO: Number of nodes with available pods: 3
I0111 17:05:58.954] Jan 11 17:05:58.954: INFO: Number of running nodes: 3, number of available pods: 3
I0111 17:05:58.992] STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
I0111 17:05:59.126] Jan 11 17:05:59.126: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:05:59.164] Jan 11 17:05:59.164: INFO: Number of nodes with available pods: 2
I0111 17:05:59.164] Jan 11 17:05:59.164: INFO: Node test-9b5ed62f1f-minion-group-n9s1 is running more than one daemon pod
I0111 17:06:00.203] Jan 11 17:06:00.202: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:06:00.241] Jan 11 17:06:00.240: INFO: Number of nodes with available pods: 2
I0111 17:06:00.241] Jan 11 17:06:00.240: INFO: Node test-9b5ed62f1f-minion-group-n9s1 is running more than one daemon pod
I0111 17:06:01.203] Jan 11 17:06:01.202: INFO: DaemonSet pods can't tolerate node test-9b5ed62f1f-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0111 17:06:01.240] Jan 11 17:06:01.240: INFO: Number of nodes with available pods: 3
I0111 17:06:01.240] Jan 11 17:06:01.240: INFO: Number of running nodes: 3, number of available pods: 3
I0111 17:06:01.241] STEP: Wait for the failed daemon pod to be completely deleted.
I0111 17:06:01.277] [AfterEach] [sig-apps] Daemon set [Serial]
I0111 17:06:01.278]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
I0111 17:06:01.315] STEP: Deleting DaemonSet "daemon-set"
I0111 17:06:01.316] STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9892, will wait for the garbage collector to delete the pods
I0111 17:06:01.445] Jan 11 17:06:01.445: INFO: Deleting DaemonSet.extensions daemon-set took: 42.054528ms
I0111 17:06:02.045] Jan 11 17:06:02.045: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.352427ms
... skipping 521 lines ...
I0111 17:11:13.239] 
I0111 17:11:13.239]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:11:13.239] ------------------------------
I0111 17:11:13.239] SSSSSSSSSS
I0111 17:11:13.239] ------------------------------
I0111 17:11:13.239] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 17:11:13.240]   should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 17:11:13.240]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:11:13.240] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:11:13.240]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:11:13.240] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:11:13.240]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 17:11:13.241] STEP: Creating a kubernetes client
I0111 17:11:13.241] Jan 11 17:11:13.233: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 17:11:13.241] STEP: Building a namespace api object, basename provisioning
I0111 17:11:13.354] STEP: Waiting for a default service account to be provisioned in namespace
I0111 17:11:13.393] [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 17:11:13.393]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:11:15.555] Jan 11 17:11:15.554: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.149.6 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-955 hostexec-test-9b5ed62f1f-minion-group-zz58 -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'
I0111 17:11:16.183] Jan 11 17:11:16.183: INFO: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
I0111 17:11:16.183] Jan 11 17:11:16.183: INFO: stdout: "0\n"
I0111 17:11:16.184] Jan 11 17:11:16.183: INFO: Requires at least 1 scsi fs localSSD 
I0111 17:11:16.184] [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 7 lines ...
I0111 17:11:23.787] [sig-storage] In-tree Volumes
I0111 17:11:23.788] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:11:23.788]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 17:11:23.788]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 17:11:23.788]     [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:11:23.788]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:11:23.789]       should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [It]
I0111 17:11:23.789]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:11:23.789] 
I0111 17:11:23.789]       Requires at least 1 scsi fs localSSD 
I0111 17:11:23.789] 
I0111 17:11:23.790]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1720
I0111 17:11:23.790] ------------------------------
... skipping 156 lines ...
I0111 17:11:52.940] STEP: Destroying namespace "provisioning-5940" for this suite.
I0111 17:12:15.057] Jan 11 17:12:15.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:12:16.620] Jan 11 17:12:16.620: INFO: namespace provisioning-5940 deletion completed in 23.679993106s
I0111 17:12:16.621] •SSSSSSSS
I0111 17:12:16.621] ------------------------------
I0111 17:12:16.621] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath 
I0111 17:12:16.621]   should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 17:12:16.622]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:12:16.622] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:12:16.622]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:12:16.622] Jan 11 17:12:16.620: INFO: Driver pd.csi.storage.gke.io doesn't support InlineVolume -- skipping
I0111 17:12:16.622] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:12:16.622]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 17:12:16.623] [sig-storage] CSI Volumes
I0111 17:12:16.623] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:12:16.623]   [Driver: pd.csi.storage.gke.io][Serial]
I0111 17:12:16.624]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:58
I0111 17:12:16.624]     [Testpattern: Inline-volume (default fs)] subPath
I0111 17:12:16.624]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:12:16.624]       should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
I0111 17:12:16.624]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:12:16.624] 
I0111 17:12:16.625]       Driver pd.csi.storage.gke.io doesn't support InlineVolume -- skipping
I0111 17:12:16.625] 
I0111 17:12:16.625]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:12:16.625] ------------------------------
... skipping 140 lines ...
I0111 17:12:27.483] 
I0111 17:12:27.483]       Driver pd.csi.storage.gke.io doesn't support InlineVolume -- skipping
I0111 17:12:27.484] 
I0111 17:12:27.484]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:12:27.484] ------------------------------
I0111 17:12:27.484] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 17:12:27.484]   should fail if subpath directory is outside the volume [Slow]
I0111 17:12:27.485]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 17:12:27.485] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:12:27.485]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:12:27.485] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:12:27.485]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 17:12:27.485] STEP: Creating a kubernetes client
I0111 17:12:27.486] Jan 11 17:12:27.473: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 17:12:27.486] STEP: Building a namespace api object, basename provisioning
I0111 17:12:27.589] STEP: Waiting for a default service account to be provisioned in namespace
I0111 17:12:27.626] [It] should fail if subpath directory is outside the volume [Slow]
I0111 17:12:27.626]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 17:12:29.788] Jan 11 17:12:29.788: INFO: Running '/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.149.6 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-6858 hostexec-test-9b5ed62f1f-minion-group-zz58 -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'
I0111 17:12:30.452] Jan 11 17:12:30.452: INFO: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
I0111 17:12:30.453] Jan 11 17:12:30.452: INFO: stdout: "0\n"
I0111 17:12:30.453] Jan 11 17:12:30.452: INFO: Requires at least 1 scsi fs localSSD 
I0111 17:12:30.453] [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 7 lines ...
I0111 17:12:38.040] [sig-storage] In-tree Volumes
I0111 17:12:38.040] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:12:38.040]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 17:12:38.041]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 17:12:38.041]     [Testpattern: Pre-provisioned PV (default fs)] subPath
I0111 17:12:38.041]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:12:38.041]       should fail if subpath directory is outside the volume [Slow] [It]
I0111 17:12:38.041]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 17:12:38.041] 
I0111 17:12:38.042]       Requires at least 1 scsi fs localSSD 
I0111 17:12:38.042] 
I0111 17:12:38.042]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1720
I0111 17:12:38.042] ------------------------------
... skipping 980 lines ...
I0111 17:27:03.436] Jan 11 17:27:03.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 17:27:03.481] STEP: Destroying namespace "pv-4945" for this suite.
I0111 17:27:17.602] Jan 11 17:27:17.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:27:19.042] Jan 11 17:27:19.042: INFO: namespace pv-4945 deletion completed in 15.561598704s
I0111 17:27:19.043] [AfterEach] [sig-storage] [Serial] Volume metrics
I0111 17:27:19.043]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:76
I0111 17:27:19.080] Jan 11 17:27:19.079: INFO: Failed to get pvc pv-4945/pvc-zkvbz: persistentvolumeclaims "pvc-zkvbz" not found
I0111 17:27:19.080] •SSSSSSSSSSSSSSSSSSSSSSSS
I0111 17:27:19.080] ------------------------------
I0111 17:27:19.080] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 17:27:19.080]   should support existing single file [LinuxOnly]
I0111 17:27:19.081]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:195
I0111 17:27:19.081] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 121 lines ...
I0111 17:27:19.099] 
I0111 17:27:19.099]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:150
I0111 17:27:19.099] ------------------------------
I0111 17:27:19.099] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0111 17:27:19.100] ------------------------------
I0111 17:27:19.100] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath 
I0111 17:27:19.100]   should fail if subpath file is outside the volume [Slow][LinuxOnly]
I0111 17:27:19.100]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 17:27:19.100] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:27:19.100]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:27:19.101] Jan 11 17:27:19.085: INFO: Driver local doesn't support InlineVolume -- skipping
I0111 17:27:19.101] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:27:19.101]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 17:27:19.101] [sig-storage] In-tree Volumes
I0111 17:27:19.102] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:27:19.102]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 17:27:19.102]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 17:27:19.102]     [Testpattern: Inline-volume (default fs)] subPath
I0111 17:27:19.102]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:27:19.102]       should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
I0111 17:27:19.103]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:232
I0111 17:27:19.103] 
I0111 17:27:19.103]       Driver local doesn't support InlineVolume -- skipping
I0111 17:27:19.103] 
I0111 17:27:19.103]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:27:19.103] ------------------------------
... skipping 364 lines ...
I0111 17:32:31.297] STEP: Destroying namespace "provisioning-6480" for this suite.
I0111 17:32:37.412] Jan 11 17:32:37.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:32:38.846] Jan 11 17:32:38.845: INFO: namespace provisioning-6480 deletion completed in 7.548840155s
I0111 17:32:38.846] •SS
I0111 17:32:38.846] ------------------------------
I0111 17:32:38.846] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath 
I0111 17:32:38.847]   should fail if subpath with backstepping is outside the volume [Slow]
I0111 17:32:38.847]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 17:32:38.847] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:32:38.847]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:32:38.847] Jan 11 17:32:38.846: INFO: Driver pd.csi.storage.gke.io doesn't support InlineVolume -- skipping
I0111 17:32:38.847] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:32:38.848]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 17:32:38.848] [sig-storage] CSI Volumes
I0111 17:32:38.848] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:32:38.849]   [Driver: pd.csi.storage.gke.io][Serial]
I0111 17:32:38.849]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:58
I0111 17:32:38.849]     [Testpattern: Inline-volume (default fs)] subPath
I0111 17:32:38.849]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:32:38.849]       should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
I0111 17:32:38.850]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 17:32:38.850] 
I0111 17:32:38.850]       Driver pd.csi.storage.gke.io doesn't support InlineVolume -- skipping
I0111 17:32:38.850] 
I0111 17:32:38.850]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:32:38.850] ------------------------------
... skipping 138 lines ...
I0111 17:33:09.469] Jan 11 17:33:09.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 17:33:09.507] STEP: Destroying namespace "pv-207" for this suite.
I0111 17:33:15.621] Jan 11 17:33:15.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:33:17.091] Jan 11 17:33:17.091: INFO: namespace pv-207 deletion completed in 7.58416282s
I0111 17:33:17.091] [AfterEach] [sig-storage] [Serial] Volume metrics
I0111 17:33:17.092]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:76
I0111 17:33:17.092] Jan 11 17:33:17.091: INFO: Failed to get pvc pv-207/: resource name may not be empty
I0111 17:33:17.092] •SSSSSSSSS
I0111 17:33:17.092] ------------------------------
I0111 17:33:17.093] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
I0111 17:33:17.093]   should access to two volumes with the same volume mode and retain data across pod recreation on different node
I0111 17:33:17.093]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:147
I0111 17:33:17.093] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
... skipping 602 lines ...
I0111 17:37:46.484] 
I0111 17:37:46.484]       Driver gluster doesn't support DynamicPV -- skipping
I0111 17:37:46.484] 
I0111 17:37:46.484]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:37:46.484] ------------------------------
I0111 17:37:46.485] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath 
I0111 17:37:46.485]   should fail if subpath directory is outside the volume [Slow]
I0111 17:37:46.485]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 17:37:46.485] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 17:37:46.485]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:37:46.486] Jan 11 17:37:46.480: INFO: Driver local doesn't support DynamicPV -- skipping
I0111 17:37:46.486] [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 17:37:46.486]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 17:37:46.487] [sig-storage] In-tree Volumes
I0111 17:37:46.487] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:37:46.487]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 17:37:46.488]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 17:37:46.488]     [Testpattern: Dynamic PV (default fs)] subPath
I0111 17:37:46.488]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:37:46.488]       should fail if subpath directory is outside the volume [Slow] [BeforeEach]
I0111 17:37:46.488]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
I0111 17:37:46.489] 
I0111 17:37:46.489]       Driver local doesn't support DynamicPV -- skipping
I0111 17:37:46.489] 
I0111 17:37:46.489]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:37:46.489] ------------------------------
... skipping 449 lines ...
I0111 17:42:22.775] 
I0111 17:42:22.775]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:79
I0111 17:42:22.775] ------------------------------
I0111 17:42:22.775] SSSSSS
I0111 17:42:22.776] ------------------------------
I0111 17:42:22.776] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath 
I0111 17:42:22.776]   should fail if subpath with backstepping is outside the volume [Slow]
I0111 17:42:22.776]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 17:42:22.776] [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:42:22.776]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:42:22.776] Jan 11 17:42:22.762: INFO: Driver local doesn't support InlineVolume -- skipping
I0111 17:42:22.777] [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
I0111 17:42:22.777]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 3 lines ...
I0111 17:42:22.777] [sig-storage] In-tree Volumes
I0111 17:42:22.777] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0111 17:42:22.777]   [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial]
I0111 17:42:22.777]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
I0111 17:42:22.778]     [Testpattern: Inline-volume (default fs)] subPath
I0111 17:42:22.778]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
I0111 17:42:22.778]       should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
I0111 17:42:22.778]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254
I0111 17:42:22.778] 
I0111 17:42:22.778]       Driver local doesn't support InlineVolume -- skipping
I0111 17:42:22.778] 
I0111 17:42:22.778]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
I0111 17:42:22.778] ------------------------------
... skipping 250 lines ...
I0111 17:46:43.775] Jan 11 17:46:43.773: INFO: GCE PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18" appears to have successfully detached from "test-9b5ed62f1f-minion-group-lgg3".
I0111 17:46:43.776] STEP: defer: cleaning up PD-RW test env
I0111 17:46:43.776] Jan 11 17:46:43.773: INFO: defer cleanup errors can usually be ignored
I0111 17:46:43.776] STEP: defer: delete host0Pod
I0111 17:46:43.811] STEP: defer: detach and delete PDs
I0111 17:46:43.812] Jan 11 17:46:43.811: INFO: Detaching GCE PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18" from node "test-9b5ed62f1f-minion-group-lgg3".
I0111 17:46:45.689] Jan 11 17:46:45.689: INFO: Error detaching PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18": googleapi: Error 400: INVALID_USAGE - No attached disk found with device name 'test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18'
I0111 17:46:45.690] STEP: Waiting for PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18" to detach from "test-9b5ed62f1f-minion-group-lgg3"
I0111 17:46:45.690] Jan 11 17:46:45.689: INFO: Waiting for GCE PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18" to detach from node "test-9b5ed62f1f-minion-group-lgg3".
I0111 17:46:45.906] Jan 11 17:46:45.906: INFO: GCE PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18" appears to have successfully detached from "test-9b5ed62f1f-minion-group-lgg3".
I0111 17:46:45.907] STEP: Deleting PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18"
I0111 17:46:48.216] Jan 11 17:46:48.216: INFO: Successfully deleted PD "test-9b5ed62f1f-379991b3-04e5-432c-9f60-e8762f24cf18".
I0111 17:46:48.216] STEP: defer: verify the number of ready nodes
... skipping 5 lines ...
I0111 17:46:48.375] STEP: Destroying namespace "pod-disks-6949" for this suite.
I0111 17:46:54.493] Jan 11 17:46:54.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:46:55.926] Jan 11 17:46:55.926: INFO: namespace pod-disks-6949 deletion completed in 7.552200114s
I0111 17:46:55.927] •SSSSSSSSSSSSSSSSSSSSSSSS
I0111 17:46:55.927] ------------------------------
I0111 17:46:55.927] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath 
I0111 17:46:55.927]   should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 17:46:55.928]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:46:55.928] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 17:46:55.928]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
I0111 17:46:55.928] [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
I0111 17:46:55.929]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0111 17:46:55.929] STEP: Creating a kubernetes client
I0111 17:46:55.929] Jan 11 17:46:55.926: INFO: >>> kubeConfig: /workspace/.kube/config
I0111 17:46:55.929] STEP: Building a namespace api object, basename provisioning
I0111 17:46:56.076] STEP: Waiting for a default service account to be provisioned in namespace
I0111 17:46:56.113] [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
I0111 17:46:56.113]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:243
I0111 17:46:56.113] STEP: deploying csi gce-pd driver
I0111 17:46:56.150] Jan 11 17:46:56.150: INFO: Found CI service account key at /etc/service-account/service-account.json
I0111 17:46:56.150] Jan 11 17:46:56.150: INFO: Running cp [/etc/service-account/service-account.json /tmp/05765b2e-f4e8-477b-b481-d59bcfc765a3/cloud-sa.json]
I0111 17:46:56.198] Jan 11 17:46:56.198: INFO: Shredding file /tmp/05765b2e-f4e8-477b-b481-d59bcfc765a3/cloud-sa.json
I0111 17:46:56.198] Jan 11 17:46:56.198: INFO: Running shred [--remove /tmp/05765b2e-f4e8-477b-b481-d59bcfc765a3/cloud-sa.json]
... skipping 24 lines ...
I0111 17:46:57.032] Jan 11 17:46:57.032: INFO: creating *v1.StatefulSet: provisioning-2902/csi-gce-pd-controller
I0111 17:46:57.098] Jan 11 17:46:57.097: INFO: Test running for native CSI Driver, not checking metrics
I0111 17:46:57.098] Jan 11 17:46:57.097: INFO: Creating resource for dynamic PV
I0111 17:46:57.098] STEP: creating a StorageClass provisioning-2902-pd.csi.storage.gke.io-sc88cc6
I0111 17:46:57.153] STEP: creating a claim
I0111 17:46:57.269] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-ntv9
I0111 17:46:57.314] STEP: Checking for subpath error in container status
I0111 17:47:33.392] Jan 11 17:47:33.391: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-ntv9" in namespace "provisioning-2902"
I0111 17:47:33.438] Jan 11 17:47:33.437: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-ntv9" to be fully deleted
I0111 17:47:41.516] STEP: Deleting pod
I0111 17:47:41.516] Jan 11 17:47:41.515: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-ntv9" in namespace "provisioning-2902"
I0111 17:47:41.557] STEP: Deleting pvc
I0111 17:47:41.557] Jan 11 17:47:41.556: INFO: Deleting PersistentVolumeClaim "pvc-czv26"
... skipping 61 lines ...
I0111 17:48:26.988] Jan 11 17:48:26.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0111 17:48:27.026] STEP: Destroying namespace "pv-3915" for this suite.
I0111 17:48:41.140] Jan 11 17:48:41.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0111 17:48:42.636] Jan 11 17:48:42.635: INFO: namespace pv-3915 deletion completed in 15.609915714s
I0111 17:48:42.636] [AfterEach] [sig-storage] [Serial] Volume metrics
I0111 17:48:42.636]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:76
I0111 17:48:42.673] Jan 11 17:48:42.673: INFO: Failed to get pvc pv-3915/pvc-jnwhx: persistentvolumeclaims "pvc-jnwhx" not found
I0111 17:48:42.673] •SSSSSSSSS
I0111 17:48:42.674] ------------------------------
I0111 17:48:42.674] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath 
I0111 17:48:42.674]   should support existing directories when readOnly specified in the volumeSource
I0111 17:48:42.674]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:370
I0111 17:48:42.674] [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 536 lines ...
I0111 17:51:56.360] 
I0111 17:51:56.361]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go: