Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-12 16:59
Elapsed: 29m1s
Revision:
Builder: gke-prow-ssd-pool-1a225945-n4mt
Refs: master:b3c4bdea
      81703:9b34fb0b
      82119:008f4e2d
      82283:ab2b20f1
      82600:84070403
      82602:75888077
links: {'resultstore': {'url': 'https://source.cloud.google.com/results/invocations/82d86144-48a9-4961-a6e9-0b41c3f1bb03/targets/test'}}
pod: 9b34af14-d57e-11e9-ad08-968d9a0b984c
resultstore: https://source.cloud.google.com/results/invocations/82d86144-48a9-4961-a6e9-0b41c3f1bb03/targets/test
infra-commit: ef701cede
repo: k8s.io/kubernetes
repo-commit: 5ee641167d11d2092ef0530581bc1b61601e4e9c
repos: {'k8s.io/kubernetes': 'master:b3c4bdea496c0e808ad761d6c387fcd6838dea99,81703:9b34fb0b627196a9d6b6d15025ff6dbd27c34365,82119:008f4e2ddcdf8484cccaaa17ad7ecec5788787f0,82283:ab2b20f1bde084ddaa9a60f9330996d52407d8e7,82600:84070403dad60237c7798978e5bcf8b8329ed790,82602:75888077d34b1312d7a9547565f2e9d16819b52b'}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision 25s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
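
To rerun this locally, a minimal sketch (assuming a k8s.io/kubernetes checkout with the refs listed above applied, and an etcd binary on PATH; the test's storage backend in the log below points at http://127.0.0.1:2379, so a local etcd on its default client port should be enough):

# start a local etcd on its default client endpoint (127.0.0.1:2379)
etcd > /tmp/etcd.log 2>&1 &
# run only the failing integration test
go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$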
=== RUN   TestVolumeProvision
W0912 17:27:51.917506  111116 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0912 17:27:51.917513  111116 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
W0912 17:27:51.918138  111116 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0912 17:27:51.918167  111116 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0912 17:27:51.918177  111116 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0912 17:27:51.918186  111116 master.go:259] Using reconciler: 
I0912 17:27:51.919502  111116 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.919759  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.919794  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.920338  111116 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0912 17:27:51.920370  111116 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.920608  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.920633  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.920708  111116 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0912 17:27:51.921460  111116 store.go:1342] Monitoring events count at <storage-prefix>//events
I0912 17:27:51.921566  111116 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.921515  111116 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0912 17:27:51.922409  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.922498  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.922909  111116 watch_cache.go:405] Replace watchCache (rev: 58689) 
I0912 17:27:51.923859  111116 watch_cache.go:405] Replace watchCache (rev: 58689) 
I0912 17:27:51.926699  111116 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0912 17:27:51.926807  111116 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.927772  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.927860  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.926943  111116 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0912 17:27:51.928710  111116 watch_cache.go:405] Replace watchCache (rev: 58689) 
I0912 17:27:51.929860  111116 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0912 17:27:51.929975  111116 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0912 17:27:51.930546  111116 watch_cache.go:405] Replace watchCache (rev: 58689) 
I0912 17:27:51.931535  111116 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.931709  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.931793  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.933986  111116 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0912 17:27:51.934529  111116 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0912 17:27:51.934662  111116 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.935154  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.935218  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.935560  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.936524  111116 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0912 17:27:51.936555  111116 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0912 17:27:51.936688  111116 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.936777  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.936791  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.937292  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.937647  111116 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0912 17:27:51.937680  111116 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0912 17:27:51.937793  111116 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.937904  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.937993  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.938765  111116 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0912 17:27:51.938776  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.938877  111116 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0912 17:27:51.938894  111116 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.939025  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.939043  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.939856  111116 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0912 17:27:51.939958  111116 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0912 17:27:51.940330  111116 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.940737  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.940858  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.940753  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.941256  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.942875  111116 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0912 17:27:51.942958  111116 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0912 17:27:51.943032  111116 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.943178  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.943206  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.943813  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.944090  111116 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0912 17:27:51.944320  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.944529  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.944620  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.944745  111116 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0912 17:27:51.945524  111116 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0912 17:27:51.945639  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.945669  111116 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.945770  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.945796  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.945865  111116 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0912 17:27:51.946613  111116 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0912 17:27:51.946654  111116 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0912 17:27:51.947043  111116 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.947178  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.947197  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.947402  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.947804  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.948064  111116 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0912 17:27:51.948101  111116 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.948245  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.948262  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.948317  111116 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0912 17:27:51.949093  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.949483  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.949513  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.950380  111116 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.950512  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.950526  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.951002  111116 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0912 17:27:51.951026  111116 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0912 17:27:51.951313  111116 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0912 17:27:51.951338  111116 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.951455  111116 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.951830  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.952034  111116 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.952579  111116 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.953484  111116 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.954066  111116 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.954555  111116 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.954747  111116 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.955039  111116 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.955495  111116 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.955956  111116 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.956334  111116 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.957483  111116 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.957819  111116 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.958314  111116 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.958568  111116 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.959128  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.959351  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.959531  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.959696  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.959949  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.960145  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.960326  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.960963  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.961291  111116 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.962020  111116 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.962623  111116 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.962942  111116 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.963221  111116 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.963816  111116 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.964123  111116 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.964712  111116 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.965288  111116 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.965816  111116 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.966443  111116 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.966752  111116 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.966952  111116 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0912 17:27:51.967047  111116 master.go:461] Enabling API group "authentication.k8s.io".
I0912 17:27:51.967110  111116 master.go:461] Enabling API group "authorization.k8s.io".
I0912 17:27:51.967257  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.967413  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.967502  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.968252  111116 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 17:27:51.968324  111116 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 17:27:51.968595  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.968809  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.969101  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.969048  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.970297  111116 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 17:27:51.970601  111116 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 17:27:51.970796  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.971024  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.971132  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.971364  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.971946  111116 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 17:27:51.972070  111116 master.go:461] Enabling API group "autoscaling".
I0912 17:27:51.971987  111116 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 17:27:51.972548  111116 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.972735  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.972831  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.973134  111116 watch_cache.go:405] Replace watchCache (rev: 58690) 
I0912 17:27:51.974045  111116 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0912 17:27:51.974072  111116 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0912 17:27:51.974175  111116 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.974314  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.974334  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.974786  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.975018  111116 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0912 17:27:51.975039  111116 master.go:461] Enabling API group "batch".
I0912 17:27:51.975076  111116 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0912 17:27:51.975161  111116 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.975286  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.975300  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.975989  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.976124  111116 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0912 17:27:51.976232  111116 master.go:461] Enabling API group "certificates.k8s.io".
I0912 17:27:51.976169  111116 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0912 17:27:51.976452  111116 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.976689  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.976702  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.977155  111116 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0912 17:27:51.977275  111116 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.977291  111116 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0912 17:27:51.977386  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.977408  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.977668  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.978146  111116 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0912 17:27:51.978168  111116 master.go:461] Enabling API group "coordination.k8s.io".
I0912 17:27:51.978180  111116 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0912 17:27:51.978210  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.978318  111116 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.978438  111116 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0912 17:27:51.978446  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.978515  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.979160  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.979194  111116 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0912 17:27:51.979218  111116 master.go:461] Enabling API group "extensions".
I0912 17:27:51.979240  111116 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0912 17:27:51.979363  111116 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.979505  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.979531  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.980201  111116 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0912 17:27:51.980222  111116 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0912 17:27:51.980398  111116 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.980443  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.980521  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.981155  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.981322  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.981983  111116 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0912 17:27:51.982004  111116 master.go:461] Enabling API group "networking.k8s.io".
I0912 17:27:51.982065  111116 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0912 17:27:51.982069  111116 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.982198  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.982211  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.982967  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.983221  111116 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0912 17:27:51.983241  111116 master.go:461] Enabling API group "node.k8s.io".
I0912 17:27:51.983242  111116 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0912 17:27:51.983371  111116 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.983492  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.983505  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.984728  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.985290  111116 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0912 17:27:51.985378  111116 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0912 17:27:51.985398  111116 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.985497  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.985515  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.986193  111116 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0912 17:27:51.986219  111116 master.go:461] Enabling API group "policy".
I0912 17:27:51.986264  111116 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0912 17:27:51.986299  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.986254  111116 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.986491  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.986521  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.987378  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.987541  111116 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0912 17:27:51.987576  111116 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0912 17:27:51.987642  111116 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.987766  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.987793  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.988648  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.988896  111116 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0912 17:27:51.988987  111116 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0912 17:27:51.989151  111116 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.989359  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.989482  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.989808  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.990356  111116 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0912 17:27:51.990401  111116 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0912 17:27:51.990624  111116 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.990754  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.990807  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.991525  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.991632  111116 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0912 17:27:51.991674  111116 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.991718  111116 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0912 17:27:51.991831  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.991855  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.992538  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.993097  111116 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0912 17:27:51.993161  111116 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0912 17:27:51.993435  111116 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.993710  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.993805  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.994424  111116 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0912 17:27:51.994465  111116 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.994513  111116 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0912 17:27:51.994624  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.994649  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.994954  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.995169  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.995974  111116 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0912 17:27:51.996047  111116 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0912 17:27:51.996121  111116 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.996249  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.996268  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.996648  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.996900  111116 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0912 17:27:51.996954  111116 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0912 17:27:51.996988  111116 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0912 17:27:51.998187  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:51.998941  111116 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:51.999064  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:51.999082  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:51.999590  111116 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0912 17:27:51.999731  111116 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0912 17:27:51.999743  111116 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.000093  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.000319  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.000565  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.001532  111116 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0912 17:27:52.001634  111116 master.go:461] Enabling API group "scheduling.k8s.io".
I0912 17:27:52.001849  111116 master.go:450] Skipping disabled API group "settings.k8s.io".
I0912 17:27:52.002109  111116 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.002405  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.002546  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.001651  111116 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0912 17:27:52.003571  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.003657  111116 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0912 17:27:52.003676  111116 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0912 17:27:52.004198  111116 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.004488  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.004673  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.005102  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.005760  111116 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0912 17:27:52.005802  111116 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.005820  111116 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0912 17:27:52.006028  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.006047  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.007026  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.007647  111116 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0912 17:27:52.007761  111116 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0912 17:27:52.008048  111116 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.008759  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.008888  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.008573  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.009838  111116 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0912 17:27:52.010164  111116 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.010270  111116 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0912 17:27:52.010348  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.010519  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.011257  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.011398  111116 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0912 17:27:52.011472  111116 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0912 17:27:52.011520  111116 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.011620  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.011643  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.012060  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.012278  111116 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0912 17:27:52.012306  111116 master.go:461] Enabling API group "storage.k8s.io".
I0912 17:27:52.012335  111116 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0912 17:27:52.012568  111116 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.012706  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.012733  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.013162  111116 watch_cache.go:405] Replace watchCache (rev: 58691) 
I0912 17:27:52.014607  111116 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0912 17:27:52.014673  111116 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0912 17:27:52.014772  111116 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.014900  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.014953  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.015484  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.015710  111116 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0912 17:27:52.015777  111116 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0912 17:27:52.015847  111116 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.015991  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.016018  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.016576  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.016997  111116 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0912 17:27:52.017068  111116 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0912 17:27:52.017148  111116 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.017337  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.017363  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.018283  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.018377  111116 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0912 17:27:52.018426  111116 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0912 17:27:52.018609  111116 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.018889  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.018988  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.019193  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.019757  111116 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0912 17:27:52.019782  111116 master.go:461] Enabling API group "apps".
I0912 17:27:52.019807  111116 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.019841  111116 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0912 17:27:52.019959  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.019986  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.020742  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.021046  111116 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0912 17:27:52.021087  111116 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.021265  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.021276  111116 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0912 17:27:52.021292  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.021986  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.022457  111116 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0912 17:27:52.022494  111116 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.022518  111116 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0912 17:27:52.022588  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.022603  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.023276  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.023393  111116 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0912 17:27:52.023454  111116 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0912 17:27:52.023486  111116 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.023874  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.024321  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.024364  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.024993  111116 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0912 17:27:52.025015  111116 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0912 17:27:52.025036  111116 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.025106  111116 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0912 17:27:52.025249  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.025269  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.025634  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.026185  111116 store.go:1342] Monitoring events count at <storage-prefix>//events
I0912 17:27:52.026208  111116 master.go:461] Enabling API group "events.k8s.io".
I0912 17:27:52.026255  111116 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0912 17:27:52.026366  111116 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.026508  111116 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.026676  111116 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.026745  111116 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.026817  111116 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.026881  111116 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.027002  111116 watch_cache.go:405] Replace watchCache (rev: 58692) 
I0912 17:27:52.027013  111116 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.027075  111116 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.027303  111116 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.027584  111116 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.028558  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.028891  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.029618  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.030004  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.030613  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.030999  111116 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.031836  111116 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.032318  111116 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.033288  111116 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.033543  111116 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.033603  111116 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0912 17:27:52.034310  111116 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.034449  111116 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.034702  111116 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.035350  111116 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.036228  111116 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.036859  111116 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.037121  111116 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.038055  111116 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.038850  111116 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.039058  111116 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.039829  111116 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.040042  111116 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0912 17:27:52.041171  111116 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.041678  111116 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.042336  111116 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.042994  111116 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.043400  111116 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.044148  111116 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.044951  111116 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.045638  111116 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.046258  111116 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.047082  111116 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.047936  111116 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.048062  111116 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0912 17:27:52.048822  111116 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.049666  111116 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.049779  111116 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0912 17:27:52.050635  111116 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.051288  111116 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.051637  111116 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.052457  111116 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.054887  111116 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.055452  111116 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.056149  111116 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.056275  111116 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0912 17:27:52.057427  111116 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.058335  111116 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.058681  111116 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.059562  111116 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.059948  111116 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.060311  111116 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.061144  111116 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.061579  111116 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.061895  111116 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.062777  111116 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.063095  111116 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.063342  111116 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 17:27:52.063479  111116 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0912 17:27:52.063555  111116 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0912 17:27:52.064282  111116 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.064944  111116 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.065661  111116 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.066298  111116 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.067111  111116 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"786db7ea-de2d-4c3a-a56f-63266d05494a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 17:27:52.070340  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.070451  111116 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0912 17:27:52.070504  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.070570  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.070618  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.070669  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.070838  111116 httplog.go:90] GET /healthz: (609.732µs) 0 [Go-http-client/1.1 127.0.0.1:41776]
I0912 17:27:52.071728  111116 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.492107ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:52.075878  111116 httplog.go:90] GET /api/v1/services: (2.549609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:52.079703  111116 httplog.go:90] GET /api/v1/services: (879.05µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:52.081859  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.081889  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.081900  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.081908  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.081988  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.082020  111116 httplog.go:90] GET /healthz: (274.786µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:52.083543  111116 httplog.go:90] GET /api/v1/services: (1.041434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:52.085368  111116 httplog.go:90] GET /api/v1/services: (2.218243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.085685  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.696422ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I0912 17:27:52.087626  111116 httplog.go:90] POST /api/v1/namespaces: (1.548936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.089393  111116 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.282762ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.091408  111116 httplog.go:90] POST /api/v1/namespaces: (1.611258ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.092722  111116 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.01722ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.094536  111116 httplog.go:90] POST /api/v1/namespaces: (1.435493ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.171721  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.171759  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.171771  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.171779  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.171798  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.171844  111116 httplog.go:90] GET /healthz: (249.303µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.182798  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.182839  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.182852  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.182861  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.182869  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.182904  111116 httplog.go:90] GET /healthz: (247.192µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.271681  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.271834  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.271883  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.271929  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.271982  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.272126  111116 httplog.go:90] GET /healthz: (594.206µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.282810  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.282974  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.283042  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.283076  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.283116  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.283270  111116 httplog.go:90] GET /healthz: (602.51µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.371587  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.371624  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.371634  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.371640  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.371647  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.371692  111116 httplog.go:90] GET /healthz: (229.878µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.382955  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.383104  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.383164  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.383260  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.383327  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.383487  111116 httplog.go:90] GET /healthz: (730.506µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.471657  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.471702  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.471728  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.471738  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.471747  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.471794  111116 httplog.go:90] GET /healthz: (321.417µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.482849  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.482892  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.482907  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.482944  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.482960  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.483007  111116 httplog.go:90] GET /healthz: (324.256µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.571615  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.571657  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.571666  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.571672  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.571678  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.571716  111116 httplog.go:90] GET /healthz: (238.731µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.582748  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.582786  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.582798  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.582806  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.582812  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.582837  111116 httplog.go:90] GET /healthz: (223.398µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.671574  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.671603  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.671612  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.671618  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.671624  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.671646  111116 httplog.go:90] GET /healthz: (197.74µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.682709  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.682738  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.682747  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.682754  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.682762  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.682795  111116 httplog.go:90] GET /healthz: (208.674µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.771643  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.771687  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.771703  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.771714  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.771722  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.771762  111116 httplog.go:90] GET /healthz: (280.006µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.782791  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.782826  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.782838  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.782846  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.782854  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.782913  111116 httplog.go:90] GET /healthz: (250.218µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.871829  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.871867  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.871877  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.871887  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.871895  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.871961  111116 httplog.go:90] GET /healthz: (254.614µs) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.882778  111116 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 17:27:52.882814  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.882824  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.882830  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.882836  111116 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.882872  111116 httplog.go:90] GET /healthz: (228.669µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:52.918312  111116 client.go:361] parsed scheme: "endpoint"
I0912 17:27:52.918405  111116 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 17:27:52.972832  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.972862  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.972871  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.972878  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.972953  111116 httplog.go:90] GET /healthz: (1.44871ms) 0 [Go-http-client/1.1 127.0.0.1:41780]
I0912 17:27:52.983447  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:52.983476  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:52.983486  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:52.983494  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:52.983531  111116 httplog.go:90] GET /healthz: (977.9µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:53.071320  111116 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.126945ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.071417  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.217594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:53.073105  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.073132  111116 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.315275ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.073136  111116 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 17:27:53.073158  111116 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 17:27:53.073173  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 17:27:53.073204  111116 httplog.go:90] GET /healthz: (829.588µs) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:53.073240  111116 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.503185ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41780]
I0912 17:27:53.073420  111116 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0912 17:27:53.073529  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (933.686µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0912 17:27:53.074456  111116 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (886.007µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.074624  111116 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.191947ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.074840  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (671.943µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0912 17:27:53.076217  111116 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.442061ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.076392  111116 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0912 17:27:53.076406  111116 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0912 17:27:53.076809  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (682.529µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41804]
I0912 17:27:53.077809  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (596.516µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.078758  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (600.51µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.079546  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (539.077µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.080500  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (706.634µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.081468  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (705.333µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.082494  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (701.001µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.083207  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.083230  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.083263  111116 httplog.go:90] GET /healthz: (749.841µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.083608  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (788.558µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.085339  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.373288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.085548  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0912 17:27:53.086431  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (703.218µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.088308  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510728ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.088629  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0912 17:27:53.089522  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (703.909µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.091271  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.239663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.091464  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0912 17:27:53.092307  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (723.586µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.093858  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.094044  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0912 17:27:53.094859  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (659.314µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.096537  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.174836ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.096685  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0912 17:27:53.097522  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (672.63µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.099013  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.147848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.099157  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0912 17:27:53.100065  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (741.865µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.101704  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.32561ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.101972  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0912 17:27:53.102944  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (703.803µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.104592  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.104757  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0912 17:27:53.105794  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (876.111µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.107877  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627472ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.108161  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0912 17:27:53.109106  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (690.271µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.110616  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.139617ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.110817  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0912 17:27:53.111586  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (577.592µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.113189  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.323316ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.113431  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0912 17:27:53.114269  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (645.04µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.116173  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49937ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.116396  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0912 17:27:53.117302  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (728.917µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.118819  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226578ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.119032  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0912 17:27:53.119792  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (591.709µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.121391  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.261111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.121591  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0912 17:27:53.122552  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (811.173µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.124029  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.161725ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.124241  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0912 17:27:53.125146  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (732.799µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.126731  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.261513ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.126910  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0912 17:27:53.127737  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (663.183µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.129258  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17701ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.129463  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0912 17:27:53.130295  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (666.271µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.131994  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395025ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.132216  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0912 17:27:53.134069  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.667418ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.135847  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369045ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.136096  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0912 17:27:53.137171  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (814.973µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.139303  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.630296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.139514  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0912 17:27:53.140427  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (736.663µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.142360  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.495335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.142488  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0912 17:27:53.143272  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (684.58µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.145135  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.256471ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.145333  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0912 17:27:53.146577  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.011995ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.148259  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.281918ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.148454  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0912 17:27:53.149369  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (747.125µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.150895  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.158769ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.151201  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0912 17:27:53.153844  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (2.493673ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.156491  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508458ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.156718  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0912 17:27:53.157617  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (730.484µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.159658  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.559026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.159962  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0912 17:27:53.160779  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (611.244µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.163138  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967631ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.163431  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0912 17:27:53.164550  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (835.405µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.166416  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452301ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.166996  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0912 17:27:53.167970  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (752.324µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.169741  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.233742ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.169988  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0912 17:27:53.170800  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (556.824µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.171872  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.171890  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.171913  111116 httplog.go:90] GET /healthz: (600.837µs) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:53.172859  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.669663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.173120  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0912 17:27:53.175170  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.857264ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.176911  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377096ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.177160  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0912 17:27:53.178193  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (863.434µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.180051  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395954ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.180254  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0912 17:27:53.181256  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (844.614µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.183178  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.183203  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.183228  111116 httplog.go:90] GET /healthz: (653.357µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.183297  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.61583ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.183651  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0912 17:27:53.184752  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (845.05µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.186363  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.266779ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.186595  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0912 17:27:53.187516  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (708.256µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.189095  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.222112ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.189312  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0912 17:27:53.190282  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (734.213µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.191657  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.028232ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.191977  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0912 17:27:53.193261  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (993.628µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.195087  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33294ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.195380  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0912 17:27:53.196411  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (826.073µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.198143  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316289ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.198320  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0912 17:27:53.199545  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (970.613µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.201307  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.394067ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.201497  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0912 17:27:53.202494  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (779.822µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.204748  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.400764ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.205050  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0912 17:27:53.205958  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (722.7µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.207564  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258895ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.207910  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0912 17:27:53.208832  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (688.075µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.210395  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.172118ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.210586  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0912 17:27:53.211463  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (697.155µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.213147  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.367785ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.213359  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0912 17:27:53.214135  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (633.541µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.216180  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.563757ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.216412  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0912 17:27:53.217563  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (966.479µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.219162  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280845ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.219403  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0912 17:27:53.220535  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (769.595µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.222482  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.288483ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.222766  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0912 17:27:53.223803  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (727.708µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.225614  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.341589ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.225848  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0912 17:27:53.226737  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (670.281µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.228277  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.124871ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.228522  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0912 17:27:53.229481  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (778.278µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.231047  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247057ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.231298  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0912 17:27:53.233292  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (914.009µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.254403  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767318ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.254701  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0912 17:27:53.272494  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.272528  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.272563  111116 httplog.go:90] GET /healthz: (1.072059ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:53.273945  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (958.352µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.283298  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.283329  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.283365  111116 httplog.go:90] GET /healthz: (771.859µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.294389  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.815565ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.294710  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0912 17:27:53.313883  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.120814ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.334829  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.28542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.335403  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0912 17:27:53.354338  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.803632ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.372462  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.372509  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.372548  111116 httplog.go:90] GET /healthz: (1.101813ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:53.374439  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838055ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.374781  111116 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0912 17:27:53.383494  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.383610  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.383767  111116 httplog.go:90] GET /healthz: (1.230419ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.395240  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.7254ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.416287  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.671057ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.416474  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0912 17:27:53.434647  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.107163ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.456516  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.781389ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.456784  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0912 17:27:53.472393  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.472655  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.472706  111116 httplog.go:90] GET /healthz: (1.24173ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:53.473622  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.098133ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.483235  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.483264  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.483335  111116 httplog.go:90] GET /healthz: (777.66µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.494390  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.911713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.494711  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0912 17:27:53.513656  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.07422ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.534551  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057955ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.534863  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0912 17:27:53.554711  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.222555ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.573214  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.573244  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.573275  111116 httplog.go:90] GET /healthz: (1.073218ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:53.580424  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.957425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.580773  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0912 17:27:53.583221  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.583261  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.583293  111116 httplog.go:90] GET /healthz: (706.348µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.593972  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.436738ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.615157  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.621231ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.616804  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0912 17:27:53.633534  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.032167ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.655572  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.022434ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.655803  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0912 17:27:53.672878  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.672909  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.673027  111116 httplog.go:90] GET /healthz: (1.521434ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:53.673533  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.113999ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.683377  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.683405  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.683456  111116 httplog.go:90] GET /healthz: (883.65µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.694185  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.676741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.694390  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0912 17:27:53.713968  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.394798ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.734737  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.143359ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.735061  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0912 17:27:53.753980  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.250876ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.772386  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.772434  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.772476  111116 httplog.go:90] GET /healthz: (991.416µs) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:53.774367  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.636909ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.774673  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0912 17:27:53.783338  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.783365  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.783397  111116 httplog.go:90] GET /healthz: (845.374µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.793446  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.022736ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.814555  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.951809ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.814805  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0912 17:27:53.833677  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.135378ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.854420  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.865575ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:53.854693  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0912 17:27:53.872767  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.872802  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.872839  111116 httplog.go:90] GET /healthz: (1.354386ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:53.873500  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.081062ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.883507  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.883637  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.883834  111116 httplog.go:90] GET /healthz: (1.274367ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.894471  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.896868ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.894654  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0912 17:27:53.913977  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.349228ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.934502  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.949551ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.934738  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0912 17:27:53.954096  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.555789ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.972298  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.972327  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.972358  111116 httplog.go:90] GET /healthz: (926.041µs) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:53.974324  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.726968ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.974499  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0912 17:27:53.983300  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:53.983324  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:53.983348  111116 httplog.go:90] GET /healthz: (827.747µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:53.993654  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.173502ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.014853  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.219805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.015235  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0912 17:27:54.034156  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.394543ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.055173  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.632619ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.055434  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0912 17:27:54.072662  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.072693  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.073326  111116 httplog.go:90] GET /healthz: (1.013092ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:54.073726  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.287457ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.083241  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.083263  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.083286  111116 httplog.go:90] GET /healthz: (767.741µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.094521  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.988351ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.094771  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0912 17:27:54.113682  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.137491ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.134588  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.059254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.134803  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0912 17:27:54.154077  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.215146ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.173210  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.173243  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.173280  111116 httplog.go:90] GET /healthz: (1.785593ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:54.174644  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057407ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.174837  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0912 17:27:54.183476  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.183503  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.183534  111116 httplog.go:90] GET /healthz: (946.003µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.193734  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.235959ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.215191  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.387195ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.215578  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0912 17:27:54.233849  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.232371ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.254518  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.927778ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.254762  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0912 17:27:54.272512  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.272539  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.272584  111116 httplog.go:90] GET /healthz: (1.115343ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:54.273621  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.071131ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.283411  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.283573  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.283703  111116 httplog.go:90] GET /healthz: (1.08333ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.296097  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.559392ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.296461  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0912 17:27:54.314064  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.494643ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.334567  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991584ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.334812  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0912 17:27:54.354059  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.372707ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.372412  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.372442  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.372475  111116 httplog.go:90] GET /healthz: (1.013808ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:54.374415  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.767586ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.374633  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0912 17:27:54.383395  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.383422  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.383464  111116 httplog.go:90] GET /healthz: (900.469µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.393668  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.126152ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.414472  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.877455ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.414730  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0912 17:27:54.433357  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (879.172µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.454046  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.557449ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.454419  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0912 17:27:54.472303  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.472342  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.472402  111116 httplog.go:90] GET /healthz: (1.023439ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:54.473686  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (988.604µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.483277  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.483299  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.483324  111116 httplog.go:90] GET /healthz: (808.913µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.494757  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.250656ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.495062  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0912 17:27:54.514132  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.508845ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.534540  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.926809ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.534803  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0912 17:27:54.554177  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.605952ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.572735  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.572857  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.573012  111116 httplog.go:90] GET /healthz: (1.496619ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:54.574872  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.344221ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.575143  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0912 17:27:54.583415  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.583446  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.583482  111116 httplog.go:90] GET /healthz: (900.27µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.593831  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.389468ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.614154  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.685327ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.614326  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0912 17:27:54.633565  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.085596ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.654193  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.615101ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.654404  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0912 17:27:54.673052  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.673084  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.673120  111116 httplog.go:90] GET /healthz: (1.60747ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:54.673793  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.250038ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.683515  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.683643  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.683833  111116 httplog.go:90] GET /healthz: (1.121389ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.694570  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.888579ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.694910  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0912 17:27:54.713617  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.072472ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.734335  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.768132ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.734634  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0912 17:27:54.753794  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.129381ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.772475  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.772525  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.772557  111116 httplog.go:90] GET /healthz: (1.036873ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:54.774678  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.993299ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.774846  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0912 17:27:54.784199  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.784365  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.784495  111116 httplog.go:90] GET /healthz: (1.898457ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.793646  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.176548ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.814511  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.947432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.814748  111116 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0912 17:27:54.833902  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.273588ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.835861  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.354963ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.854556  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.037779ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.854797  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0912 17:27:54.873883  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.388254ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.875093  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.875121  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.875363  111116 httplog.go:90] GET /healthz: (3.831187ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:54.875954  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.376167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.883588  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.883764  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.883990  111116 httplog.go:90] GET /healthz: (1.36832ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.894439  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.876998ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.894852  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0912 17:27:54.913800  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.264729ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.915581  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.25926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.934520  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.984158ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.934812  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0912 17:27:54.953843  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.29146ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.955608  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.226298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:54.973426  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.973611  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.973797  111116 httplog.go:90] GET /healthz: (2.044581ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:54.974579  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.896593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.974904  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0912 17:27:54.983664  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:54.983696  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:54.983727  111116 httplog.go:90] GET /healthz: (913.494µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.993577  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.117029ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:54.994941  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.033291ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.014301  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.672038ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.014668  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0912 17:27:55.033471  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (973.543µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.035288  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.202256ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.054200  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.678271ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.054459  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0912 17:27:55.072369  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.072400  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.072437  111116 httplog.go:90] GET /healthz: (995.387µs) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:55.073249  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (841.048µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.074547  111116 httplog.go:90] GET /api/v1/namespaces/kube-public: (963.244µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.083272  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.083294  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.083320  111116 httplog.go:90] GET /healthz: (744.51µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.094086  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.644919ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.094273  111116 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0912 17:27:55.113963  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.377316ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.115581  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.025172ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.134648  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.110564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.134841  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0912 17:27:55.153495  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (980.407µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.155022  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.102157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.172834  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.173177  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.174462  111116 httplog.go:90] GET /healthz: (2.885836ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:55.174512  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.878323ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.174804  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0912 17:27:55.183664  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.183695  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.183746  111116 httplog.go:90] GET /healthz: (1.039709ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.193974  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.330713ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.195712  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.255078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.214556  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.948497ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.215022  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0912 17:27:55.233957  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.381009ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.235576  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.135789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.254442  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.860292ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.254829  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0912 17:27:55.272785  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.272908  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.273122  111116 httplog.go:90] GET /healthz: (1.531514ms) 0 [Go-http-client/1.1 127.0.0.1:41778]
I0912 17:27:55.273977  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (974.005µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.275595  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.218402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.283456  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.283582  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.283733  111116 httplog.go:90] GET /healthz: (1.081467ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.295029  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.459181ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.295377  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0912 17:27:55.314069  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.560625ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.316349  111116 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.665753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.334779  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.212881ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.335383  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0912 17:27:55.353709  111116 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.166382ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.355764  111116 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.308432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.372609  111116 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 17:27:55.372639  111116 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 17:27:55.372676  111116 httplog.go:90] GET /healthz: (1.131101ms) 0 [Go-http-client/1.1 127.0.0.1:41802]
I0912 17:27:55.374600  111116 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.91077ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.374898  111116 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0912 17:27:55.383910  111116 httplog.go:90] GET /healthz: (1.196266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.385486  111116 httplog.go:90] GET /api/v1/namespaces/default: (1.059393ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.387303  111116 httplog.go:90] POST /api/v1/namespaces: (1.477062ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.388355  111116 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (803.543µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.391987  111116 httplog.go:90] POST /api/v1/namespaces/default/services: (3.323345ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.393294  111116 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (945.214µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.395123  111116 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.40746ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.472445  111116 httplog.go:90] GET /healthz: (937.798µs) 200 [Go-http-client/1.1 127.0.0.1:41778]
W0912 17:27:55.473073  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473180  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473268  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473330  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473375  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473421  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473457  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473485  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473512  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473577  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.473619  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0912 17:27:55.473663  111116 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0912 17:27:55.473686  111116 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0912 17:27:55.474142  111116 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474208  111116 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474358  111116 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474379  111116 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474518  111116 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474531  111116 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474695  111116 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474706  111116 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474939  111116 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474954  111116 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475283  111116 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475398  111116 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475644  111116 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475673  111116 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475754  111116 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475776  111116 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475828  111116 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (338.342µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41968]
I0912 17:27:55.475591  111116 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.475909  111116 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.474200  111116 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.476150  111116 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.476159  111116 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.476262  111116 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (1.07409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0912 17:27:55.476163  111116 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.476594  111116 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (1.33612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41964]
I0912 17:27:55.475359  111116 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (391.842µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41802]
I0912 17:27:55.476970  111116 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (488.765µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41970]
I0912 17:27:55.477092  111116 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (423.165µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0912 17:27:55.477091  111116 get.go:250] Starting watch for /api/v1/pods, rv=58690 labels= fields= timeout=9m9s
I0912 17:27:55.477364  111116 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=58691 labels= fields= timeout=5m1s
I0912 17:27:55.477391  111116 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (322.661µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0912 17:27:55.477673  111116 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=58692 labels= fields= timeout=9m51s
I0912 17:27:55.477761  111116 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58690 labels= fields= timeout=7m25s
I0912 17:27:55.477818  111116 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58691 labels= fields= timeout=7m46s
I0912 17:27:55.477888  111116 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=58691 labels= fields= timeout=8m33s
I0912 17:27:55.478326  111116 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (931.907µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41968]
I0912 17:27:55.478373  111116 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (488.334µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41974]
I0912 17:27:55.478699  111116 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=58692 labels= fields= timeout=5m35s
I0912 17:27:55.479017  111116 get.go:250] Starting watch for /api/v1/nodes, rv=58690 labels= fields= timeout=5m14s
I0912 17:27:55.479059  111116 get.go:250] Starting watch for /api/v1/services, rv=58933 labels= fields= timeout=7m25s
I0912 17:27:55.479078  111116 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (2.346882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0912 17:27:55.479838  111116 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (5.386628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:27:55.480345  111116 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58690 labels= fields= timeout=6m20s
I0912 17:27:55.480560  111116 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=58690 labels= fields= timeout=8m23s
I0912 17:27:55.574179  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574220  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574229  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574237  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574243  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574248  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574255  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574260  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574265  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574276  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574282  111116 shared_informer.go:227] caches populated
I0912 17:27:55.574291  111116 shared_informer.go:227] caches populated
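The "Starting reflector ... Listing and watching ..." lines and the "caches populated" messages above come from the scheduler's shared informers performing an initial LIST followed by a WATCH for each resource type. A rough client-go sketch of that pattern, assuming an out-of-cluster kubeconfig (the path is a placeholder; the test wires its informers directly against the in-process apiserver):

    package main

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig file (path is an assumption for this sketch).
        config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // A shared informer factory lists and then watches each requested resource,
        // which is what produces the "Listing and watching ..." log lines above.
        factory := informers.NewSharedInformerFactory(clientset, 0)
        podInformer := factory.Core().V1().Pods().Informer()
        pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()

        stopCh := make(chan struct{})
        defer close(stopCh)
        factory.Start(stopCh)

        // Block until the initial LIST for every started informer has been cached.
        factory.WaitForCacheSync(stopCh)
        fmt.Println("caches populated:", podInformer.HasSynced(), pvcInformer.HasSynced())
    }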
I0912 17:27:55.574500  111116 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0912 17:27:55.574542  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.574586  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.574608  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.574622  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 17:27:55.574634  111116 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0912 17:27:55.574686  111116 pv_controller_base.go:282] Starting persistent volume controller
I0912 17:27:55.574800  111116 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0912 17:27:55.575086  111116 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575107  111116 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575115  111116 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575121  111116 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575124  111116 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575136  111116 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575180  111116 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575099  111116 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575202  111116 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.575193  111116 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0912 17:27:55.576525  111116 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (597.125µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I0912 17:27:55.576647  111116 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (566.095µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0912 17:27:55.576679  111116 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (444.639µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I0912 17:27:55.576770  111116 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (400.796µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0912 17:27:55.577127  111116 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (400.845µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I0912 17:27:55.577281  111116 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58691 labels= fields= timeout=7m2s
I0912 17:27:55.577359  111116 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58690 labels= fields= timeout=9m16s
I0912 17:27:55.577470  111116 get.go:250] Starting watch for /api/v1/nodes, rv=58690 labels= fields= timeout=9m19s
I0912 17:27:55.577778  111116 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58690 labels= fields= timeout=5m21s
I0912 17:27:55.577902  111116 get.go:250] Starting watch for /api/v1/pods, rv=58690 labels= fields= timeout=8m22s
I0912 17:27:55.674836  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675059  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675170  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675261  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675362  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675013  111116 shared_informer.go:227] caches populated
I0912 17:27:55.675532  111116 shared_informer.go:204] Caches are synced for persistent volume 
I0912 17:27:55.675551  111116 pv_controller_base.go:158] controller initialized
I0912 17:27:55.675632  111116 pv_controller_base.go:419] resyncing PV controller
I0912 17:27:55.678264  111116 httplog.go:90] POST /api/v1/nodes: (2.184479ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.678908  111116 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0912 17:27:55.680145  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.437183ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.682171  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.630433ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.682379  111116 volume_binding_test.go:751] Running test wait one bound, one provisioned
I0912 17:27:55.684380  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.696816ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.685968  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.158321ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.687606  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.23523ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
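The POST /apis/storage.k8s.io/v1/storageclasses requests above create the storage classes used by the "wait one bound, one provisioned" case. A hedged sketch of an equivalent typed-client call, assuming a current client-go where Create takes a context; the class name here is a placeholder, and the mock provisioner is only meaningful inside this test:

    package main

    import (
        "context"
        "fmt"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        mode := storagev1.VolumeBindingWaitForFirstConsumer
        sc := &storagev1.StorageClass{
            ObjectMeta: metav1.ObjectMeta{Name: "wait-example"}, // name is an assumption
            // The test uses a mock provisioner; a real cluster would name a real one here.
            Provisioner:       "kubernetes.io/mock-provisioner",
            VolumeBindingMode: &mode,
        }

        created, err := clientset.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created StorageClass", created.Name)
    }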
I0912 17:27:55.689716  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.637544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.690289  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 58949
I0912 17:27:55.690340  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0912 17:27:55.690364  111116 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0912 17:27:55.690371  111116 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0912 17:27:55.692108  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.877685ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.692614  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind", version 58950
I0912 17:27:55.692636  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.692683  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: no volume found
I0912 17:27:55.692705  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] status: set phase Pending
I0912 17:27:55.692720  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] status: phase Pending already set
I0912 17:27:55.693056  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-w-canbind", UID:"39fdc635-9b32-4fd7-b079-b0a47ef3560f", APIVersion:"v1", ResourceVersion:"58950", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0912 17:27:55.695414  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (2.556121ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.695444  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (2.133391ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0912 17:27:55.695546  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 58952
I0912 17:27:55.695566  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.695588  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:27:55.695603  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Pending
I0912 17:27:55.695619  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Pending already set
I0912 17:27:55.695639  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"ac7c5958-215c-4638-b175-4d29333b0c88", APIVersion:"v1", ResourceVersion:"58952", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
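The two persistentvolumeclaims POSTs and the WaitForFirstConsumer events above show claims that reference a WaitForFirstConsumer class, so they stay Pending until a pod that uses them is scheduled. A minimal sketch of the same request made directly against the REST path seen in the log, using only the standard library; the host, namespace, claim name, and size are assumptions:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // A PVC manifest matching the flow in the log: the claim names a
    // WaitForFirstConsumer class, so the controller only emits a
    // WaitForFirstConsumer event instead of binding it immediately.
    const pvcJSON = `{
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {"name": "pvc-canprovision"},
      "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "wait-example",
        "resources": {"requests": {"storage": "1Gi"}}
      }
    }`

    func main() {
        // The path mirrors the POST in the log; the host is an assumed test apiserver.
        url := "http://127.0.0.1:8080/api/v1/namespaces/default/persistentvolumeclaims"
        resp, err := http.Post(url, "application/json", bytes.NewBufferString(pvcJSON))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // expect 201 Created, as in the log
    }

In the log the same kind of request returns 201 and each claim immediately receives a WaitForFirstConsumer event rather than being bound.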
I0912 17:27:55.697836  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.455836ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.698708  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (2.3345ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0912 17:27:55.699047  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision
I0912 17:27:55.699063  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision
I0912 17:27:55.699241  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" on node "node-1"
I0912 17:27:55.699264  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" on node "node-1"
I0912 17:27:55.699283  111116 scheduler_binder.go:734] Provisioning for claims of pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I0912 17:27:55.699335  111116 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision", node "node-1"
I0912 17:27:55.699355  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind", version 58950
I0912 17:27:55.699366  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 58952
I0912 17:27:55.699411  111116 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision", node "node-1"
I0912 17:27:55.700089  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (9.246043ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0912 17:27:55.700402  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 58955
I0912 17:27:55.700425  111116 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0912 17:27:55.700546  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 58955
I0912 17:27:55.700592  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0912 17:27:55.700614  111116 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0912 17:27:55.700621  111116 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0912 17:27:55.700629  111116 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0912 17:27:55.702115  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind: (2.503102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0912 17:27:55.702714  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58956
I0912 17:27:55.702733  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.702752  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: no volume found
I0912 17:27:55.702761  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: started
I0912 17:27:55.702776  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind[39fdc635-9b32-4fd7-b079-b0a47ef3560f]]
I0912 17:27:55.702815  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] started, class: "wait-t9kg"
I0912 17:27:55.704891  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind: (1.890856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.705110  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58957
I0912 17:27:55.705339  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58957
I0912 17:27:55.705361  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.705380  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: no volume found
I0912 17:27:55.705388  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: started
I0912 17:27:55.705400  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind[39fdc635-9b32-4fd7-b079-b0a47ef3560f]]
I0912 17:27:55.705407  111116 pv_controller.go:1642] operation "provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind[39fdc635-9b32-4fd7-b079-b0a47ef3560f]" is already running, skipping
I0912 17:27:55.706149  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f: (881.513µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.706375  111116 pv_controller.go:1476] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" created
I0912 17:27:55.706399  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: trying to save volume pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f
I0912 17:27:55.706979  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (3.361664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0912 17:27:55.707139  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58958
I0912 17:27:55.707161  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.707181  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:27:55.707188  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:27:55.707198  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[ac7c5958-215c-4638-b175-4d29333b0c88]]
I0912 17:27:55.707231  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] started, class: "wait-t9kg"
I0912 17:27:55.708149  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.607358ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.708321  111116 pv_controller.go:1501] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" saved
I0912 17:27:55.708342  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f", version 58959
I0912 17:27:55.708367  111116 pv_controller.go:1554] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.708500  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-w-canbind", UID:"39fdc635-9b32-4fd7-b079-b0a47ef3560f", APIVersion:"v1", ResourceVersion:"58957", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f using kubernetes.io/mock-provisioner
I0912 17:27:55.708427  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 58959
I0912 17:27:55.709236  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.709277  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind
I0912 17:27:55.709297  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.709314  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:27:55.709345  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58957
I0912 17:27:55.709359  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.709386  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.709399  111116 pv_controller.go:931] binding volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.709419  111116 pv_controller.go:829] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.709437  111116 pv_controller.go:841] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.709447  111116 pv_controller.go:777] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: set phase Bound
I0912 17:27:55.710138  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.578075ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
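The events POSTed above (WaitForFirstConsumer, then ProvisioningSucceeded) can be read back per claim with a field selector. A sketch assuming a current client-go where List takes a context; the namespace and claim name are placeholders for the test's generated ones:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // List the events recorded against one claim; the namespace and claim name
        // are placeholders, the test's generated namespace is much longer.
        events, err := clientset.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=pvc-w-canbind",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range events.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
        }
    }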
I0912 17:27:55.710552  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (3.009065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0912 17:27:55.710865  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58960
I0912 17:27:55.712145  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f/status: (1.873864ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:55.712460  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 58962
I0912 17:27:55.712503  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.712516  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind
I0912 17:27:55.712533  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.712549  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:27:55.712595  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 58962
I0912 17:27:55.712650  111116 pv_controller.go:798] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" entered phase "Bound"
I0912 17:27:55.712684  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: binding to "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f"
I0912 17:27:55.712719  111116 pv_controller.go:901] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.712995  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ac7c5958-215c-4638-b175-4d29333b0c88: (1.792291ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0912 17:27:55.713340  111116 pv_controller.go:1476] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" created
I0912 17:27:55.713373  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: trying to save volume pvc-ac7c5958-215c-4638-b175-4d29333b0c88
I0912 17:27:55.714735  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind: (1.771805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:55.715261  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.71403ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0912 17:27:55.715788  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88", version 58964
I0912 17:27:55.715912  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.716137  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:27:55.716259  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.716371  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:27:55.716577  111116 pv_controller.go:1501] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" saved
I0912 17:27:55.716685  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 58964
I0912 17:27:55.716824  111116 pv_controller.go:1554] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.715846  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58963
I0912 17:27:55.717093  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"ac7c5958-215c-4638-b175-4d29333b0c88", APIVersion:"v1", ResourceVersion:"58960", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ac7c5958-215c-4638-b175-4d29333b0c88 using kubernetes.io/mock-provisioner
I0912 17:27:55.717132  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: bound to "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f"
I0912 17:27:55.717405  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] status: set phase Bound
I0912 17:27:55.718682  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.401485ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:55.719165  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind/status: (1.499016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.719691  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58966
I0912 17:27:55.719725  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" entered phase "Bound"
I0912 17:27:55.719781  111116 pv_controller.go:957] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.719802  111116 pv_controller.go:958] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.719827  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f", bindCompleted: true, boundByController: true
I0912 17:27:55.719863  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58960
I0912 17:27:55.719877  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.719900  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.719911  111116 pv_controller.go:931] binding volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.719952  111116 pv_controller.go:829] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.719968  111116 pv_controller.go:841] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.719989  111116 pv_controller.go:777] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: set phase Bound
I0912 17:27:55.721800  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 58967
I0912 17:27:55.721833  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.721842  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:27:55.721854  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:55.721864  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:27:55.722332  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ac7c5958-215c-4638-b175-4d29333b0c88/status: (2.146518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.722634  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 58967
I0912 17:27:55.722655  111116 pv_controller.go:798] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" entered phase "Bound"
I0912 17:27:55.722665  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-ac7c5958-215c-4638-b175-4d29333b0c88"
I0912 17:27:55.722679  111116 pv_controller.go:901] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.724695  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.808181ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.724979  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58969
I0912 17:27:55.725007  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: bound to "pvc-ac7c5958-215c-4638-b175-4d29333b0c88"
I0912 17:27:55.725015  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:27:55.726638  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision/status: (1.435004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.726907  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58970
I0912 17:27:55.726987  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" entered phase "Bound"
I0912 17:27:55.727005  111116 pv_controller.go:957] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.727029  111116 pv_controller.go:958] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.727051  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-ac7c5958-215c-4638-b175-4d29333b0c88", bindCompleted: true, boundByController: true
I0912 17:27:55.727096  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" with version 58966
I0912 17:27:55.727113  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: phase: Bound, bound to: "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f", bindCompleted: true, boundByController: true
I0912 17:27:55.727131  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.727143  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: claim is already correctly bound
I0912 17:27:55.727154  111116 pv_controller.go:931] binding volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.727166  111116 pv_controller.go:829] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.727186  111116 pv_controller.go:841] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.727196  111116 pv_controller.go:777] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: set phase Bound
I0912 17:27:55.727204  111116 pv_controller.go:780] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: phase Bound already set
I0912 17:27:55.727214  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: binding to "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f"
I0912 17:27:55.727234  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind]: already bound to "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f"
I0912 17:27:55.727243  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] status: set phase Bound
I0912 17:27:55.727262  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind] status: phase Bound already set
I0912 17:27:55.727276  111116 pv_controller.go:957] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind"
I0912 17:27:55.727296  111116 pv_controller.go:958] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:55.727312  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f", bindCompleted: true, boundByController: true
I0912 17:27:55.727336  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 58970
I0912 17:27:55.727359  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Bound, bound to: "pvc-ac7c5958-215c-4638-b175-4d29333b0c88", bindCompleted: true, boundByController: true
I0912 17:27:55.727375  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.727385  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: claim is already correctly bound
I0912 17:27:55.727401  111116 pv_controller.go:931] binding volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.727411  111116 pv_controller.go:829] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.727425  111116 pv_controller.go:841] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.727433  111116 pv_controller.go:777] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: set phase Bound
I0912 17:27:55.727442  111116 pv_controller.go:780] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: phase Bound already set
I0912 17:27:55.727449  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-ac7c5958-215c-4638-b175-4d29333b0c88"
I0912 17:27:55.727470  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: already bound to "pvc-ac7c5958-215c-4638-b175-4d29333b0c88"
I0912 17:27:55.727479  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:27:55.727497  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Bound already set
I0912 17:27:55.727509  111116 pv_controller.go:957] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:27:55.727529  111116 pv_controller.go:958] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:55.727545  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-ac7c5958-215c-4638-b175-4d29333b0c88", bindCompleted: true, boundByController: true
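At this point both claims in the test namespace report "status after binding: phase: Bound". For orientation, a minimal client-go sketch of the kind of bound-state check the harness performs in the GETs that follow; this is an illustration under stated assumptions (placeholder clientset/namespace, pre-1.18 client-go signatures without a context argument), not the test's own helper.

```go
package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkPVCBound verifies that the PV controller has finished binding a
// claim, mirroring the state the log reports above ("status after binding:
// phase: Bound ... bindCompleted: true"). cs and ns are placeholders, and
// the Get signature assumes pre-1.18 client-go (no context argument).
func checkPVCBound(cs kubernetes.Interface, ns, name string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pvc.Status.Phase != v1.ClaimBound || pvc.Spec.VolumeName == "" {
		return fmt.Errorf("claim %s/%s not bound yet (phase %q)", ns, name, pvc.Status.Phase)
	}
	return nil
}
```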
I0912 17:27:55.801389  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.743839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:55.901406  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.712185ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.001761  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (2.021044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.101327  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.680314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.201402  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.765444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.301194  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.599652ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.401279  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.633092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.473273  111116 cache.go:669] Couldn't expire cache for pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision. Binding is still in progress.
I0912 17:27:56.501175  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.574312ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.601247  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.602491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.701186  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.569422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.707417  111116 scheduler_binder.go:546] All PVCs for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision" are bound
I0912 17:27:56.707513  111116 factory.go:606] Attempting to bind pod-pvc-canbind-or-provision to node-1
I0912 17:27:56.710102  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision/binding: (2.298226ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.710591  111116 scheduler.go:662] pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0912 17:27:56.712296  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.439214ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
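The pod has just been bound to node-1; the repeated GETs of pod-pvc-canbind-or-provision before and after this point are the test polling until the pod reports as scheduled. A hedged sketch of that polling pattern using client-go's wait helper, assuming pre-1.18 signatures; cs, ns and podName are placeholders, and the real test helper may check different fields.

```go
package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the pod reports a PodScheduled=True
// condition, mirroring the periodic GETs visible in the log.
// cs/ns/podName are placeholders; signatures assume pre-1.18 client-go.
func waitForPodScheduled(cs kubernetes.Interface, ns, podName string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```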
I0912 17:27:56.801500  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canbind-or-provision: (1.849415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.803814  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind: (1.656806ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.805804  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.363854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.807644  111116 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.393677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.814125  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (5.794895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.819088  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" deleted
I0912 17:27:56.819238  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 58967
I0912 17:27:56.819327  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:56.819406  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:27:56.821105  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.398506ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.821497  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (6.609315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.821527  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:27:56.821727  111116 pv_controller.go:575] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" is released and reclaim policy "Delete" will be executed
I0912 17:27:56.821785  111116 pv_controller.go:777] updating PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: set phase Released
I0912 17:27:56.821873  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" deleted
I0912 17:27:56.824365  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ac7c5958-215c-4638-b175-4d29333b0c88/status: (2.146882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.824599  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 59007
I0912 17:27:56.824625  111116 pv_controller.go:798] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" entered phase "Released"
I0912 17:27:56.824638  111116 pv_controller.go:1022] reclaimVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: policy is Delete
I0912 17:27:56.824662  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-ac7c5958-215c-4638-b175-4d29333b0c88[285fcbc8-a840-4b2b-894d-48212088ae38]]
I0912 17:27:56.824691  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 58962
I0912 17:27:56.824718  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:56.824731  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind
I0912 17:27:56.824866  111116 pv_controller.go:1146] deleteVolumeOperation [pvc-ac7c5958-215c-4638-b175-4d29333b0c88] started
I0912 17:27:56.826993  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ac7c5958-215c-4638-b175-4d29333b0c88: (1.567681ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.827136  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-canbind: (2.213048ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.827238  111116 pv_controller.go:1250] isVolumeReleased[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume is released
I0912 17:27:56.827251  111116 pv_controller.go:1285] doDeleteVolume [pvc-ac7c5958-215c-4638-b175-4d29333b0c88]
I0912 17:27:56.827278  111116 pv_controller.go:1316] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" deleted
I0912 17:27:56.827285  111116 pv_controller.go:1193] deleteVolumeOperation [pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: success
I0912 17:27:56.827356  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind not found
I0912 17:27:56.827377  111116 pv_controller.go:575] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" is released and reclaim policy "Delete" will be executed
I0912 17:27:56.827390  111116 pv_controller.go:777] updating PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: set phase Released
I0912 17:27:56.829349  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f/status: (1.758504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.829575  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 59009
I0912 17:27:56.829596  111116 pv_controller.go:798] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" entered phase "Released"
I0912 17:27:56.829609  111116 pv_controller.go:1022] reclaimVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: policy is Delete
I0912 17:27:56.829625  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f[e8a9acef-4f25-480e-b488-66ed24339bcd]]
I0912 17:27:56.829651  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" with version 59007
I0912 17:27:56.829673  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: ac7c5958-215c-4638-b175-4d29333b0c88)", boundByController: true
I0912 17:27:56.829682  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:27:56.829702  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:27:56.829709  111116 pv_controller.go:1022] reclaimVolume[pvc-ac7c5958-215c-4638-b175-4d29333b0c88]: policy is Delete
I0912 17:27:56.829733  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-ac7c5958-215c-4638-b175-4d29333b0c88[285fcbc8-a840-4b2b-894d-48212088ae38]]
I0912 17:27:56.829740  111116 pv_controller.go:1642] operation "delete-pvc-ac7c5958-215c-4638-b175-4d29333b0c88[285fcbc8-a840-4b2b-894d-48212088ae38]" is already running, skipping
I0912 17:27:56.829759  111116 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0912 17:27:56.829783  111116 pv_controller.go:1146] deleteVolumeOperation [pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f] started
I0912 17:27:56.830007  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" with version 59009
I0912 17:27:56.830046  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind (uid: 39fdc635-9b32-4fd7-b079-b0a47ef3560f)", boundByController: true
I0912 17:27:56.830060  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind
I0912 17:27:56.830082  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind not found
I0912 17:27:56.830090  111116 pv_controller.go:1022] reclaimVolume[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: policy is Delete
I0912 17:27:56.830105  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f[e8a9acef-4f25-480e-b488-66ed24339bcd]]
I0912 17:27:56.830113  111116 pv_controller.go:1642] operation "delete-pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f[e8a9acef-4f25-480e-b488-66ed24339bcd]" is already running, skipping
I0912 17:27:56.831269  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f: (1.243445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.831493  111116 pv_controller.go:1250] isVolumeReleased[pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: volume is released
I0912 17:27:56.831510  111116 pv_controller.go:1285] doDeleteVolume [pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]
I0912 17:27:56.831535  111116 pv_controller.go:1316] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" deleted
I0912 17:27:56.831544  111116 pv_controller.go:1193] deleteVolumeOperation [pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f]: success
I0912 17:27:56.832432  111116 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-ac7c5958-215c-4638-b175-4d29333b0c88: (5.030938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.833132  111116 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f: (1.399114ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.833285  111116 pv_controller_base.go:212] volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" deleted
I0912 17:27:56.833305  111116 pv_controller.go:1200] failed to delete volume "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" from database: persistentvolumes "pvc-39fdc635-9b32-4fd7-b079-b0a47ef3560f" not found
I0912 17:27:56.833330  111116 pv_controller_base.go:212] volume "pvc-ac7c5958-215c-4638-b175-4d29333b0c88" deleted
I0912 17:27:56.833404  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind" was already processed
I0912 17:27:56.833499  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" was already processed
I0912 17:27:56.833680  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.720159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0912 17:27:56.846882  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.729298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
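Between test cases the harness wipes what it created: the DELETE calls above remove the namespace's pods and claims, then the persistent volumes, and finally the storage classes. A rough client-go sketch of that cleanup, assuming pre-1.18 signatures; cs and ns are placeholders and the actual test helpers may differ.

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupTestObjects illustrates the collection deletes seen above between
// test cases (pods, PVCs, PVs, StorageClasses). It assumes pre-1.18
// client-go signatures; cs and ns are placeholders.
func cleanupTestObjects(cs kubernetes.Interface, ns string) error {
	del := &metav1.DeleteOptions{}
	all := metav1.ListOptions{}
	if err := cs.CoreV1().Pods(ns).DeleteCollection(del, all); err != nil {
		return err
	}
	if err := cs.CoreV1().PersistentVolumeClaims(ns).DeleteCollection(del, all); err != nil {
		return err
	}
	if err := cs.CoreV1().PersistentVolumes().DeleteCollection(del, all); err != nil {
		return err
	}
	return cs.StorageV1().StorageClasses().DeleteCollection(del, all)
}
```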
I0912 17:27:56.847186  111116 volume_binding_test.go:751] Running test one immediate pv prebound, one wait provisioned
I0912 17:27:56.848386  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.011597ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.850322  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.577637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.855995  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.040077ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.858158  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.624899ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.858410  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 59021
I0912 17:27:56.858460  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: )", boundByController: false
I0912 17:27:56.858469  111116 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:27:56.858478  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
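The POST and pv_controller lines above create pv-i-prebound with a ClaimRef that already points at pvc-i-pv-prebound, which is why the controller logs it as pre-bound and only moves it to phase Available. A hedged sketch of a PV of that shape; the HostPath source, capacity, class name and reclaim policy are illustrative assumptions, not values read from the test.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preboundPV sketches a volume of the "pv-i-prebound" shape: a ClaimRef set
// by hand to a claim that does not yet hold the bind, so the PV controller
// treats the volume as pre-bound. All concrete values are illustrative.
func preboundPV(ns string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
			StorageClassName:              "immediate-sc", // hypothetical immediate-binding class
			ClaimRef: &v1.ObjectReference{
				Namespace: ns,
				Name:      "pvc-i-pv-prebound",
			},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-i-prebound"},
			},
		},
	}
}
```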
I0912 17:27:56.860455  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.757674ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.860631  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound", version 59022
I0912 17:27:56.860679  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:56.860712  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: )", boundByController: false
I0912 17:27:56.860734  111116 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:27:56.860747  111116 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:27:56.860770  111116 pv_controller.go:849] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0912 17:27:56.860716  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.985435ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.861156  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59023
I0912 17:27:56.861184  111116 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0912 17:27:56.861211  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59023
I0912 17:27:56.861235  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: )", boundByController: false
I0912 17:27:56.861243  111116 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:27:56.861249  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0912 17:27:56.861265  111116 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0912 17:27:56.862539  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.743296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.863481  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.119838ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.863689  111116 pv_controller.go:852] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0912 17:27:56.863723  111116 pv_controller.go:934] error binding volume "pv-i-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0912 17:27:56.863740  111116 pv_controller_base.go:246] could not sync claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
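The 409 above is ordinary optimistic concurrency: the controller attempted the bind with a stale copy of pv-i-prebound (its status had just been updated), so the apiserver rejected the PUT and the claim is simply retried on a later sync, as the "could not sync claim" line says. For ordinary client code the same situation is usually resolved with a re-read-and-retry loop; a generic sketch using client-go's retry helper follows — this is not the PV controller's own mechanism, and all names are placeholders.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// bindClaimRefWithRetry re-reads the PV and reapplies the change whenever
// the apiserver answers 409 Conflict, as it did for pv-i-prebound above.
// Generic client-side pattern only; pre-1.18 client-go signatures assumed.
func bindClaimRefWithRetry(cs kubernetes.Interface, pvName, ns, claimName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := cs.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pv.Spec.ClaimRef == nil {
			pv.Spec.ClaimRef = &v1.ObjectReference{Namespace: ns, Name: claimName}
		}
		_, err = cs.CoreV1().PersistentVolumes().Update(pv)
		return err
	})
}
```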
I0912 17:27:56.863775  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59024
I0912 17:27:56.863802  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:27:56.863829  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:27:56.863876  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Pending
I0912 17:27:56.863899  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Pending already set
I0912 17:27:56.863946  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"68e41193-cb7a-4f87-a0ba-6e0f251515d3", APIVersion:"v1", ResourceVersion:"59024", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
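The WaitForFirstConsumer event above means pvc-canprovision references a StorageClass with delayed volume binding, so provisioning is deferred until a pod consuming the claim is scheduled. A minimal sketch of such a class; the class name is hypothetical, while the provisioner string matches the one in this log.

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// waitClass sketches a StorageClass with delayed (WaitForFirstConsumer)
// binding, the mode that produces the event above. The class name is
// hypothetical; the provisioner string is the one appearing in this log.
func waitClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"},
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &mode,
	}
}
```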
I0912 17:27:56.864980  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (1.858092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0912 17:27:56.865376  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
I0912 17:27:56.865391  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
E0912 17:27:56.865575  111116 factory.go:557] Error scheduling volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0912 17:27:56.865602  111116 factory.go:615] Updating pod condition for volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0912 17:27:56.865994  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.731769ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.867082  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.152091ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:56.869071  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (2.788183ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0912 17:27:56.870180  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned/status: (4.35543ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
E0912 17:27:56.870492  111116 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0912 17:27:56.968440  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.873962ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.068736  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.801501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.167785  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.916127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.267472  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.634631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.367732  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.873196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.468175  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.27319ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.567366  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.617821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.671033  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (5.259458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.767847  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.027707ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.867754  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.9787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:57.967833  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.997116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.067808  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.90574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.167300  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.549715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.267419  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.586727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.367615  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.761736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.467341  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.53453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.567305  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.496796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.667433  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.666889ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.767488  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.672692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.867382  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.567077ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:58.967517  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.691115ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.067491  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.705342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.167390  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.610783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.267127  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.368053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.367126  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.362501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.467330  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.567885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.567627  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.787564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.667425  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.615988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.767387  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.56616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.867553  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.839046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:27:59.967616  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.73771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.067710  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.898239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.167539  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.700335ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.267574  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.828779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.367362  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.53363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.467569  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.805962ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.567348  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.550156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.667570  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.753331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.767396  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.605306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.867310  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.515971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:00.967444  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.551536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.067747  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.877343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.167676  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.773575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.267378  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.569794ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.367538  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.672876ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.467621  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.814169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.567371  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.587684ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.667507  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.731306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.767393  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.600338ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.867503  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.65549ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:01.967270  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.427296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.072995  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (7.110593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.167643  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.82306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.267529  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.718975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.367685  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.71712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.467266  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.491967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.567508  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.712131ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.667563  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.754045ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.767469  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.683206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.867452  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.678988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:02.967606  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.761208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.067561  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.66158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.167508  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.673531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.267688  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.841168ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.367689  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.897685ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.467414  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.605611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.567428  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.579836ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.667647  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.864016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.767293  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.469788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.867843  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.968269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:03.967595  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.733871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.067412  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.60315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.167593  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.708951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.267164  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.3488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.367523  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.71117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.467496  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.662763ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.567513  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.611093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.667281  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.522886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.767601  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.687424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.867405  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.551094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:04.967324  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.510409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.067344  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.523692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.167396  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.593332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.267789  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.893582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.367650  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.782916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.386050  111116 httplog.go:90] GET /api/v1/namespaces/default: (1.447413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.387739  111116 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.245037ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.389278  111116 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.070915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.467631  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.721739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.567784  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.770699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.667781  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.934122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.767736  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.835075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.867395  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.521658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:05.967593  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.690892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.067502  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.675494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.167449  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.643199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.267853  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.795551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.367347  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.536029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.467478  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.647526ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.567611  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.747291ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.667588  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.808228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.767576  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.73425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.867759  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.77316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:06.967509  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.618088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.068245  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.378913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.167628  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.832781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.267542  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.670556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.367661  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.780511ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.468015  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.096394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.567669  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.802271ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.667858  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.98344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.767724  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.822624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.867438  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.589928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:07.967395  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.575285ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.067624  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.79044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.167418  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.551123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.267704  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.788162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.367808  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.882871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.467448  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.628726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.567527  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.685368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.667647  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.753196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.767342  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.541425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.867504  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.636496ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:08.967459  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.577982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.067459  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.605904ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.167556  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.712366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.267414  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.580708ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.367481  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.621736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.467362  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.485606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.567533  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.685076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.667411  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.57959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.767604  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.78095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.867544  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.72398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:09.967631  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.74479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.067484  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.39908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.167512  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.650352ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.267620  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.695342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.367632  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.698609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.467531  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.649036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.567485  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.639524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.667481  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.627908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.675799  111116 pv_controller_base.go:419] resyncing PV controller
I0912 17:28:10.675889  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59023
I0912 17:28:10.675945  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: )", boundByController: false
I0912 17:28:10.675959  111116 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:28:10.675967  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0912 17:28:10.675977  111116 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0912 17:28:10.676003  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" with version 59022
I0912 17:28:10.676019  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:10.676051  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: )", boundByController: false
I0912 17:28:10.676071  111116 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.676084  111116 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.676113  111116 pv_controller.go:849] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0912 17:28:10.678442  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.034131ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.678662  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
I0912 17:28:10.678688  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
I0912 17:28:10.678812  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59195
I0912 17:28:10.678840  111116 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.678848  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59195
I0912 17:28:10.678849  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0912 17:28:10.678876  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:10.678888  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:28:10.678899  111116 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:10.678910  111116 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
E0912 17:28:10.679017  111116 factory.go:557] Error scheduling volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0912 17:28:10.679048  111116 factory.go:615] Updating pod condition for volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
E0912 17:28:10.679061  111116 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0912 17:28:10.680876  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.61915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:10.681002  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.663251ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.681154  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.562164ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42338]
I0912 17:28:10.681330  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59196
I0912 17:28:10.681358  111116 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0912 17:28:10.681387  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0912 17:28:10.681402  111116 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.681533  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59196
I0912 17:28:10.681699  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:10.681723  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:28:10.681736  111116 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:10.681747  111116 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0912 17:28:10.683233  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-i-pv-prebound: (1.623595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.683547  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" with version 59198
I0912 17:28:10.683582  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0912 17:28:10.683593  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound] status: set phase Bound
I0912 17:28:10.685067  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-i-pv-prebound/status: (1.279395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.685274  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" with version 59199
I0912 17:28:10.685300  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" entered phase "Bound"
I0912 17:28:10.685313  111116 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.685328  111116 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:10.685340  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0912 17:28:10.685366  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59024
I0912 17:28:10.685374  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:10.685393  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:10.685409  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Pending
I0912 17:28:10.685420  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Pending already set
I0912 17:28:10.685430  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" with version 59199
I0912 17:28:10.685438  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0912 17:28:10.685448  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:10.685459  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: claim is already correctly bound
I0912 17:28:10.685466  111116 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.685475  111116 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.685485  111116 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.685491  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0912 17:28:10.685497  111116 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0912 17:28:10.685502  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0912 17:28:10.685513  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0912 17:28:10.685518  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound] status: set phase Bound
I0912 17:28:10.685530  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound] status: phase Bound already set
I0912 17:28:10.685537  111116 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound"
I0912 17:28:10.685548  111116 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:10.685561  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0912 17:28:10.685809  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"68e41193-cb7a-4f87-a0ba-6e0f251515d3", APIVersion:"v1", ResourceVersion:"59024", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0912 17:28:10.687401  111116 httplog.go:90] PATCH /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events/pvc-canprovision.15c3c0fa616b54da: (1.539931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.767647  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.674977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.867480  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.619152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:10.967400  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.620405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.067408  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.581762ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.167608  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.68871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.267352  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.578218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.367371  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.546484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.467833  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.919536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.567690  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.79702ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.667358  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.57843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.767559  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.634441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.867800  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.863767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:11.967551  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.748703ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.068871  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.858182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.167656  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.783785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.267465  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.63447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.367421  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.575419ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.467587  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.769182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.475635  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
I0912 17:28:12.475667  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned
I0912 17:28:12.475859  111116 scheduler_binder.go:652] All bound volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I0912 17:28:12.475887  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" on node "node-1"
I0912 17:28:12.475905  111116 scheduler_binder.go:734] Provisioning for claims of pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0912 17:28:12.476008  111116 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned", node "node-1"
I0912 17:28:12.476044  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59024
I0912 17:28:12.476099  111116 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned", node "node-1"
I0912 17:28:12.478378  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.942873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.478563  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59201
I0912 17:28:12.478642  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:12.478672  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:12.478748  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:12.478798  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[68e41193-cb7a-4f87-a0ba-6e0f251515d3]]
I0912 17:28:12.478882  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] started, class: "wait-m6gz"
I0912 17:28:12.480847  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.624516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.481116  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59202
I0912 17:28:12.481137  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:12.481153  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:12.481159  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:12.481168  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[68e41193-cb7a-4f87-a0ba-6e0f251515d3]]
I0912 17:28:12.481173  111116 pv_controller.go:1642] operation "provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[68e41193-cb7a-4f87-a0ba-6e0f251515d3]" is already running, skipping
I0912 17:28:12.481190  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59202
I0912 17:28:12.482266  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3: (892.361µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.482467  111116 pv_controller.go:1476] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" created
I0912 17:28:12.482494  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: trying to save volume pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3
I0912 17:28:12.484217  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.543168ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.484993  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3", version 59203
I0912 17:28:12.485044  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.485059  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:12.485079  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:12.485097  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:12.485128  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59202
I0912 17:28:12.485152  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:12.485179  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.485194  111116 pv_controller.go:931] binding volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.485208  111116 pv_controller.go:829] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.485226  111116 pv_controller.go:841] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.485238  111116 pv_controller.go:777] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: set phase Bound
I0912 17:28:12.485294  111116 pv_controller.go:1501] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" saved
I0912 17:28:12.485405  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59203
I0912 17:28:12.485475  111116 pv_controller.go:1554] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.485583  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"68e41193-cb7a-4f87-a0ba-6e0f251515d3", APIVersion:"v1", ResourceVersion:"59202", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3 using kubernetes.io/mock-provisioner
I0912 17:28:12.487438  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.554895ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:12.487564  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3/status: (1.793214ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.487631  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59204
I0912 17:28:12.487671  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.487682  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:12.487704  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:12.487723  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:12.487781  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59204
I0912 17:28:12.487805  111116 pv_controller.go:798] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" entered phase "Bound"
I0912 17:28:12.487819  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3"
I0912 17:28:12.487836  111116 pv_controller.go:901] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.489557  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.423557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.490354  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59206
I0912 17:28:12.490391  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: bound to "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3"
I0912 17:28:12.490404  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:12.492177  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision/status: (1.568471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.492334  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59207
I0912 17:28:12.492353  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" entered phase "Bound"
I0912 17:28:12.492366  111116 pv_controller.go:957] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.492386  111116 pv_controller.go:958] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.492397  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3", bindCompleted: true, boundByController: true
I0912 17:28:12.492425  111116 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" version 59206
I0912 17:28:12.492698  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59207
I0912 17:28:12.492822  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Bound, bound to: "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3", bindCompleted: true, boundByController: true
I0912 17:28:12.492908  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.493000  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: claim is already correctly bound
I0912 17:28:12.493082  111116 pv_controller.go:931] binding volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.493148  111116 pv_controller.go:829] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.493222  111116 pv_controller.go:841] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.493285  111116 pv_controller.go:777] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: set phase Bound
I0912 17:28:12.493351  111116 pv_controller.go:780] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: phase Bound already set
I0912 17:28:12.493408  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3"
I0912 17:28:12.493485  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: already bound to "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3"
I0912 17:28:12.493542  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:12.493618  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Bound already set
I0912 17:28:12.493681  111116 pv_controller.go:957] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:12.493753  111116 pv_controller.go:958] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:12.493823  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3", bindCompleted: true, boundByController: true
I0912 17:28:12.567520  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.654683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.667586  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.703581ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.767466  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.634359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.867615  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.71546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:12.967664  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.796566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.067482  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.623518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.167463  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.598496ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.267876  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (2.006022ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.367615  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.737874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.467658  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.732257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.475711  111116 cache.go:669] Couldn't expire cache for pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0912 17:28:13.478957  111116 scheduler_binder.go:546] All PVCs for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned" are bound
I0912 17:28:13.479023  111116 factory.go:606] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I0912 17:28:13.482783  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned/binding: (3.387847ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.483133  111116 scheduler.go:662] pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0912 17:28:13.485277  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.75839ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.567345  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-pv-prebound-w-provisioned: (1.464869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.569188  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-i-pv-prebound: (1.172363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.570679  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.04285ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.572397  111116 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.207645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.577879  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (5.000345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.582092  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" deleted
I0912 17:28:13.582134  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59204
I0912 17:28:13.582165  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:13.582175  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:13.583683  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (5.350357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.583910  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.241983ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.583946  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" deleted
I0912 17:28:13.584135  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:28:13.584159  111116 pv_controller.go:575] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" is released and reclaim policy "Delete" will be executed
I0912 17:28:13.584169  111116 pv_controller.go:777] updating PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: set phase Released
I0912 17:28:13.585981  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3/status: (1.615053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.586216  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59214
I0912 17:28:13.586250  111116 pv_controller.go:798] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" entered phase "Released"
I0912 17:28:13.586263  111116 pv_controller.go:1022] reclaimVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: policy is Delete
I0912 17:28:13.586285  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3[78c2a390-5e49-40a3-b2a9-c3986d36d2d8]]
I0912 17:28:13.586311  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59196
I0912 17:28:13.586331  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:13.586338  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:28:13.586354  111116 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound not found
I0912 17:28:13.586363  111116 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0912 17:28:13.586369  111116 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0912 17:28:13.586495  111116 pv_controller.go:1146] deleteVolumeOperation [pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3] started
I0912 17:28:13.588280  111116 store.go:228] deletion of /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0912 17:28:13.588511  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.94237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.588708  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3: (1.82454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.588732  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59215
I0912 17:28:13.588752  111116 pv_controller.go:798] volume "pv-i-prebound" entered phase "Released"
I0912 17:28:13.588763  111116 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0912 17:28:13.588783  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" with version 59214
I0912 17:28:13.588808  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: 68e41193-cb7a-4f87-a0ba-6e0f251515d3)", boundByController: true
I0912 17:28:13.588837  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:13.588859  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:28:13.588866  111116 pv_controller.go:1022] reclaimVolume[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: policy is Delete
I0912 17:28:13.588871  111116 pv_controller.go:1250] isVolumeReleased[pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: volume is released
I0912 17:28:13.588879  111116 pv_controller.go:1285] doDeleteVolume [pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]
I0912 17:28:13.588885  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3[78c2a390-5e49-40a3-b2a9-c3986d36d2d8]]
I0912 17:28:13.588893  111116 pv_controller.go:1642] operation "delete-pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3[78c2a390-5e49-40a3-b2a9-c3986d36d2d8]" is already running, skipping
I0912 17:28:13.588904  111116 pv_controller.go:1316] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" deleted
I0912 17:28:13.588910  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 59215
I0912 17:28:13.588912  111116 pv_controller.go:1193] deleteVolumeOperation [pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3]: success
I0912 17:28:13.588994  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound (uid: fd80b050-59f9-4297-869c-71ae8e718090)", boundByController: false
I0912 17:28:13.589016  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound
I0912 17:28:13.589040  111116 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound not found
I0912 17:28:13.589047  111116 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0912 17:28:13.590245  111116 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0912 17:28:13.590291  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-i-pv-prebound" was already processed
I0912 17:28:13.592036  111116 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3: (2.950687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.592117  111116 store.go:228] deletion of /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3 failed because of a conflict, going to retry
I0912 17:28:13.592339  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.250953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0912 17:28:13.592410  111116 pv_controller_base.go:212] volume "pvc-68e41193-cb7a-4f87-a0ba-6e0f251515d3" deleted
I0912 17:28:13.592451  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" was already processed
I0912 17:28:13.600279  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.549018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
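The case that starts below exercises delayed binding: pvc-canprovision stays Pending with reason WaitForFirstConsumer until a pod that uses it is scheduled, and only then is a volume provisioned for the chosen node. As a minimal Go sketch (not part of the captured log), this is roughly what a StorageClass with that binding mode looks like, assuming the mock provisioner name that appears later in this log; the class name here is illustrative, not the generated wait-7gtm the test actually creates:

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Delayed binding: the PV controller waits for a consuming pod before
	// provisioning, which is why the log below records a WaitForFirstConsumer event.
	waitMode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"}, // illustrative name
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &waitMode,
	}
	fmt.Println(sc.Name, *sc.VolumeBindingMode)
}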
I0912 17:28:13.600509  111116 volume_binding_test.go:751] Running test wait one pv prebound, one provisioned
I0912 17:28:13.601800  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.07981ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.603291  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.136861ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.604686  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.07774ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.606783  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.702528ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.607152  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 59224
I0912 17:28:13.607189  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: )", boundByController: false
I0912 17:28:13.607195  111116 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound
I0912 17:28:13.607207  111116 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0912 17:28:13.608416  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.161876ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.608654  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound", version 59225
I0912 17:28:13.608694  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.608719  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: )", boundByController: false
I0912 17:28:13.608731  111116 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.608742  111116 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.608757  111116 pv_controller.go:849] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0912 17:28:13.609030  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.608861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.609297  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59226
I0912 17:28:13.609319  111116 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
I0912 17:28:13.609352  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59226
I0912 17:28:13.609366  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: )", boundByController: false
I0912 17:28:13.609376  111116 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound
I0912 17:28:13.609380  111116 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0912 17:28:13.609386  111116 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0912 17:28:13.610051  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.293752ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.610907  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.66882ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.611180  111116 pv_controller.go:852] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0912 17:28:13.611211  111116 pv_controller.go:934] error binding volume "pv-w-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0912 17:28:13.611227  111116 pv_controller_base.go:246] could not sync claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0912 17:28:13.611278  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59227
I0912 17:28:13.611299  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.611324  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:13.611350  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Pending
I0912 17:28:13.611367  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Pending already set
I0912 17:28:13.611550  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", APIVersion:"v1", ResourceVersion:"59227", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0912 17:28:13.612176  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (1.726811ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.612341  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned
I0912 17:28:13.612360  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned
I0912 17:28:13.612534  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" on node "node-1"
I0912 17:28:13.612558  111116 scheduler_binder.go:734] Provisioning for claims of pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0912 17:28:13.612604  111116 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned", node "node-1"
I0912 17:28:13.612639  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-prebound", version 59226
I0912 17:28:13.612662  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59227
I0912 17:28:13.612711  111116 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned", node "node-1"
I0912 17:28:13.612735  111116 scheduler_binder.go:400] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0912 17:28:13.613331  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.746365ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.614333  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.361502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42372]
I0912 17:28:13.614450  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59230
I0912 17:28:13.614488  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.614499  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound
I0912 17:28:13.614516  111116 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.614538  111116 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0912 17:28:13.614567  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" with version 59225
I0912 17:28:13.614585  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.614598  111116 scheduler_binder.go:406] updating PersistentVolume[pv-w-prebound]: bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.614608  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.614625  111116 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.614635  111116 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.614650  111116 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.614663  111116 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0912 17:28:13.616362  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.535182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.616720  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59232
I0912 17:28:13.616756  111116 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0912 17:28:13.616758  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59232
I0912 17:28:13.616769  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0912 17:28:13.616784  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.616785  111116 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.616793  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound
I0912 17:28:13.616804  111116 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.616829  111116 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0912 17:28:13.616877  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (2.082553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.618210  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-pv-prebound: (1.251731ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.618405  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" with version 59233
I0912 17:28:13.618440  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0912 17:28:13.618449  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound] status: set phase Bound
I0912 17:28:13.620017  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.324912ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.620198  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" with version 59234
I0912 17:28:13.620224  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" entered phase "Bound"
I0912 17:28:13.620236  111116 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.620256  111116 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.620278  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0912 17:28:13.620319  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59231
I0912 17:28:13.620341  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.620362  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:13.620371  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:13.620393  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]]
I0912 17:28:13.620421  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" with version 59234
I0912 17:28:13.620441  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0912 17:28:13.620456  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.620465  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: claim is already correctly bound
I0912 17:28:13.620475  111116 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.620484  111116 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.620499  111116 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.620513  111116 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0912 17:28:13.620522  111116 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0912 17:28:13.620531  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0912 17:28:13.620550  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0912 17:28:13.620565  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound] status: set phase Bound
I0912 17:28:13.620584  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound] status: phase Bound already set
I0912 17:28:13.620603  111116 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound"
I0912 17:28:13.620621  111116 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:13.620639  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0912 17:28:13.620684  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] started, class: "wait-7gtm"
I0912 17:28:13.622426  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.537086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.622594  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59235
I0912 17:28:13.622608  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59235
I0912 17:28:13.622627  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.622649  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:13.622657  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:13.622670  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]]
I0912 17:28:13.622683  111116 pv_controller.go:1642] operation "provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]" is already running, skipping
I0912 17:28:13.623620  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04: (818.494µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.623834  111116 pv_controller.go:1476] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" created
I0912 17:28:13.623860  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: trying to save volume pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04
I0912 17:28:13.625319  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.258046ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.625531  111116 pv_controller.go:1501] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" saved
I0912 17:28:13.625569  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", version 59236
I0912 17:28:13.625586  111116 pv_controller.go:1554] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.625590  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59236
I0912 17:28:13.625620  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.625648  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:13.625644  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", APIVersion:"v1", ResourceVersion:"59235", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04 using kubernetes.io/mock-provisioner
I0912 17:28:13.625664  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.625679  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:13.625721  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59235
I0912 17:28:13.625743  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.625767  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.625783  111116 pv_controller.go:931] binding volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.625804  111116 pv_controller.go:829] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.625819  111116 pv_controller.go:841] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.625835  111116 pv_controller.go:777] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: set phase Bound
I0912 17:28:13.626957  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.119593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:13.627517  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04/status: (1.457135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.627627  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59238
I0912 17:28:13.627659  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.627684  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:13.627696  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:13.627706  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:13.627785  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59238
I0912 17:28:13.627801  111116 pv_controller.go:798] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" entered phase "Bound"
I0912 17:28:13.627809  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04"
I0912 17:28:13.627821  111116 pv_controller.go:901] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.629327  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.283006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.629524  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59239
I0912 17:28:13.629559  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: bound to "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04"
I0912 17:28:13.629580  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:13.631010  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision/status: (1.246242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.631222  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59240
I0912 17:28:13.631248  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" entered phase "Bound"
I0912 17:28:13.631260  111116 pv_controller.go:957] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.631275  111116 pv_controller.go:958] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.631287  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", bindCompleted: true, boundByController: true
I0912 17:28:13.631314  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59240
I0912 17:28:13.631333  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Bound, bound to: "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", bindCompleted: true, boundByController: true
I0912 17:28:13.631344  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.631353  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: claim is already correctly bound
I0912 17:28:13.631363  111116 pv_controller.go:931] binding volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.631369  111116 pv_controller.go:829] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.631380  111116 pv_controller.go:841] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.631385  111116 pv_controller.go:777] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: set phase Bound
I0912 17:28:13.631391  111116 pv_controller.go:780] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: phase Bound already set
I0912 17:28:13.631396  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04"
I0912 17:28:13.631408  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: already bound to "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04"
I0912 17:28:13.631414  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:13.631425  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Bound already set
I0912 17:28:13.631432  111116 pv_controller.go:957] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:13.631444  111116 pv_controller.go:958] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:13.631453  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04", bindCompleted: true, boundByController: true
I0912 17:28:13.714726  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.889129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.814743  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.843642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:13.914476  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.495988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.014352  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.564564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.114404  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.48287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.214343  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.468312ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.314523  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.63812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.414472  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.639056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.475892  111116 cache.go:669] Couldn't expire cache for pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I0912 17:28:14.514520  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.613664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.614479  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.684261ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.617307  111116 scheduler_binder.go:546] All PVCs for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned" are bound
I0912 17:28:14.617368  111116 factory.go:606] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I0912 17:28:14.619474  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned/binding: (1.834196ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.619802  111116 scheduler.go:662] pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0912 17:28:14.621638  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.495539ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
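In the binding sequence above, pvc-w-pv-prebound needed no provisioning because pv-w-prebound already carried a ClaimRef to it, which is what syncVolume reports as "volume is pre-bound to claim". A minimal sketch (not part of the captured log) of such a pre-bound PV, assuming the core/v1 types of this build; the namespace is illustrative, and capacity plus a volume source are omitted for brevity:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pre-bound PV: spec.claimRef names the claim up front, so the controller
	// binds it directly instead of provisioning a new volume.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-w-prebound"},
		Spec: corev1.PersistentVolumeSpec{
			ClaimRef: &corev1.ObjectReference{
				Namespace: "example-ns", // illustrative; the test uses a generated namespace
				Name:      "pvc-w-pv-prebound",
			},
			// Matches the Retain policy this log shows for pv-w-prebound.
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			// Capacity and a volume source are omitted; a real PV must set them.
		},
	}
	fmt.Println(pv.Name, pv.Spec.ClaimRef.Name)
}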
I0912 17:28:14.714530  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-w-pv-prebound-w-provisioned: (1.695749ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.716498  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-w-pv-prebound: (1.18467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.717904  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (959.717µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.719236  111116 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (875.332µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.723709  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (4.037756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.727163  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" deleted
I0912 17:28:14.727214  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59238
I0912 17:28:14.727250  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:14.727264  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:14.728538  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (811.898µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.728745  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:28:14.728774  111116 pv_controller.go:575] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" is released and reclaim policy "Delete" will be executed
I0912 17:28:14.728786  111116 pv_controller.go:777] updating PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: set phase Released
I0912 17:28:14.729175  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" deleted
I0912 17:28:14.729485  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (5.389078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.730615  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04/status: (1.595297ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.730865  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59247
I0912 17:28:14.730887  111116 pv_controller.go:798] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" entered phase "Released"
I0912 17:28:14.730896  111116 pv_controller.go:1022] reclaimVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: policy is Delete
I0912 17:28:14.730997  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04[89726185-4a99-422c-b8c7-fb564a4fc146]]
I0912 17:28:14.731066  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59232
I0912 17:28:14.731094  111116 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound (uid: 0528fcbd-86df-45a9-8a61-d0b26af14abe)", boundByController: false
I0912 17:28:14.731132  111116 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound
I0912 17:28:14.731142  111116 pv_controller.go:1146] deleteVolumeOperation [pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04] started
I0912 17:28:14.731156  111116 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound not found
I0912 17:28:14.731195  111116 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0912 17:28:14.731274  111116 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0912 17:28:14.732318  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04: (853.457µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.733096  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.359546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42384]
I0912 17:28:14.733287  111116 store.go:228] deletion of /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I0912 17:28:14.733309  111116 pv_controller.go:1250] isVolumeReleased[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume is released
I0912 17:28:14.733322  111116 pv_controller.go:1285] doDeleteVolume [pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]
I0912 17:28:14.733363  111116 pv_controller.go:1316] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" deleted
I0912 17:28:14.733473  111116 pv_controller.go:1193] deleteVolumeOperation [pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: success
I0912 17:28:14.734111  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 59248
I0912 17:28:14.734234  111116 pv_controller.go:798] volume "pv-w-prebound" entered phase "Released"
I0912 17:28:14.734982  111116 pv_controller.go:1011] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0912 17:28:14.735104  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" with version 59247
I0912 17:28:14.735211  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04)", boundByController: true
I0912 17:28:14.735260  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:14.735310  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:28:14.735353  111116 pv_controller.go:1022] reclaimVolume[pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04]: policy is Delete
I0912 17:28:14.735403  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04[89726185-4a99-422c-b8c7-fb564a4fc146]]
I0912 17:28:14.735448  111116 pv_controller.go:1642] operation "delete-pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04[89726185-4a99-422c-b8c7-fb564a4fc146]" is already running, skipping
I0912 17:28:14.735507  111116 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0912 17:28:14.735770  111116 pv_controller_base.go:339] deletion of volume "pv-w-prebound" was already processed
I0912 17:28:14.735797  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-pv-prebound" was already processed
I0912 17:28:14.737165  111116 pv_controller_base.go:212] volume "pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04" deleted
I0912 17:28:14.737211  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" was already processed
I0912 17:28:14.737286  111116 store.go:228] deletion of /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04 failed because of a conflict, going to retry
I0912 17:28:14.737502  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.696071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42374]
I0912 17:28:14.737597  111116 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-bb679ea0-c2bd-4ec3-b3e7-3d0ee2b18a04: (3.877069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.744650  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.727972ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
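The last case below uses an Immediate-mode class, so the PV controller starts provisionClaimOperation for pvc-controller-provisioned as soon as the claim is stored, before any pod references it. A minimal sketch (not part of the captured log) of such a claim, assuming an illustrative class name rather than the generated immediate-nvvd, with storage requests omitted for brevity:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Immediate binding: no consuming pod is required; the controller provisions
	// right after the claim is created.
	className := "immediate-example" // illustrative; the test generates its own class name
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-controller-provisioned"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &className,
			// spec.resources.requests (e.g. storage: 1Gi) omitted for brevity;
			// a real claim must set it.
		},
	}
	fmt.Println(pvc.Name, *pvc.Spec.StorageClassName)
}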
I0912 17:28:14.744804  111116 volume_binding_test.go:751] Running test immediate provisioned by controller
I0912 17:28:14.746095  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.077229ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.747596  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.177867ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.749025  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.044409ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.750593  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.159606ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.750731  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned", version 59257
I0912 17:28:14.750757  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:14.750773  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: no volume found
I0912 17:28:14.750780  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: started
I0912 17:28:14.750791  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned[380af228-618f-4c30-af1b-a21807b1a552]]
I0912 17:28:14.750823  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned] started, class: "immediate-nvvd"
I0912 17:28:14.752133  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-controller-provisioned: (1.111249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.752226  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59258
I0912 17:28:14.752250  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:14.752271  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: no volume found
I0912 17:28:14.752279  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: started
I0912 17:28:14.752287  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59258
I0912 17:28:14.752293  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned[380af228-618f-4c30-af1b-a21807b1a552]]
I0912 17:28:14.752305  111116 pv_controller.go:1642] operation "provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned[380af228-618f-4c30-af1b-a21807b1a552]" is already running, skipping
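The scheduleOperation line followed by "is already running, skipping" above is the controller deduplicating concurrent syncs of the same claim: a second syncClaim pass sees the provision operation keyed by the claim name and UID already in flight and does nothing. A minimal stdlib sketch of that pattern (not the real pv_controller code; names here are illustrative):

// opmap.go - a simplified sketch of launching a named operation at most once
// until it completes, mirroring the "is already running, skipping" log lines.
package main

import (
	"fmt"
	"sync"
	"time"
)

type operationMap struct {
	mu      sync.Mutex
	running map[string]bool
}

// run starts fn under the given name unless an operation with that name is
// already in flight.
func (m *operationMap) run(name string, fn func()) {
	m.mu.Lock()
	if m.running[name] {
		m.mu.Unlock()
		fmt.Printf("operation %q is already running, skipping\n", name)
		return
	}
	m.running[name] = true
	m.mu.Unlock()

	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, name)
			m.mu.Unlock()
		}()
		fn()
	}()
}

func main() {
	m := &operationMap{running: map[string]bool{}}
	work := func() { time.Sleep(50 * time.Millisecond) }

	// Two sync passes for the same PVC schedule the same provision operation;
	// only the first actually runs.
	m.run("provision-ns/pvc-controller-provisioned", work)
	m.run("provision-ns/pvc-controller-provisioned", work)
	time.Sleep(100 * time.Millisecond)
}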
I0912 17:28:14.752375  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (1.315363ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42384]
I0912 17:28:14.753050  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
I0912 17:28:14.753069  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
I0912 17:28:14.753102  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-380af228-618f-4c30-af1b-a21807b1a552: (708.737µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.753354  111116 pv_controller.go:1476] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" created
I0912 17:28:14.753375  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: trying to save volume pvc-380af228-618f-4c30-af1b-a21807b1a552
E0912 17:28:14.753491  111116 factory.go:557] Error scheduling volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0912 17:28:14.753601  111116 factory.go:615] Updating pod condition for volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I0912 17:28:14.754569  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (834.994µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42384]
I0912 17:28:14.754628  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.111773ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.754908  111116 pv_controller.go:1501] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" saved
I0912 17:28:14.754977  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-380af228-618f-4c30-af1b-a21807b1a552", version 59260
I0912 17:28:14.755013  111116 pv_controller.go:1554] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.755088  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59260
I0912 17:28:14.755078  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-controller-provisioned", UID:"380af228-618f-4c30-af1b-a21807b1a552", APIVersion:"v1", ResourceVersion:"59258", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-380af228-618f-4c30-af1b-a21807b1a552 using kubernetes.io/mock-provisioner
I0912 17:28:14.755136  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.755211  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned
I0912 17:28:14.755301  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.15145ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42386]
I0912 17:28:14.755443  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:14.755474  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:14.755542  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound/status: (1.329308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42388]
I0912 17:28:14.755555  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59258
I0912 17:28:14.755576  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:14.755646  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.755668  111116 pv_controller.go:931] binding volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.755680  111116 pv_controller.go:829] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
E0912 17:28:14.755755  111116 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0912 17:28:14.755750  111116 pv_controller.go:841] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.755842  111116 pv_controller.go:777] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: set phase Bound
I0912 17:28:14.755847  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
I0912 17:28:14.755968  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
E0912 17:28:14.756101  111116 factory.go:557] Error scheduling volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0912 17:28:14.756126  111116 factory.go:615] Updating pod condition for volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
E0912 17:28:14.756135  111116 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
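The "Error scheduling ... pod has unbound immediate PersistentVolumeClaims" / "error selecting node" pair above is expected at this point in the test: the claim uses an Immediate-binding StorageClass, so the scheduler refuses to place pod-i-unbound until the PV controller has bound the claim, and retries later. A minimal sketch of that gate, with hypothetical types rather than the real scheduler code:

// A pod is unschedulable while any PVC it references whose StorageClass uses
// Immediate binding has no bound volume yet.
package main

import (
	"errors"
	"fmt"
)

type claim struct {
	name                 string
	waitForFirstConsumer bool // volumeBindingMode of the claim's StorageClass
	boundVolume          string
}

func checkImmediateClaims(podClaims []claim) error {
	for _, c := range podClaims {
		if !c.waitForFirstConsumer && c.boundVolume == "" {
			return errors.New("pod has unbound immediate PersistentVolumeClaims")
		}
	}
	return nil
}

func main() {
	// pod-i-unbound references an Immediate-mode claim that the PV controller
	// has not bound yet, so scheduling fails and the pod is retried later.
	claims := []claim{{name: "pvc-controller-provisioned"}}
	if err := checkImmediateClaims(claims); err != nil {
		fmt.Println("retrying:", err)
	}
}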
I0912 17:28:14.756610  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.272804ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:14.757514  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (907.01µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:14.757606  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.172065ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42384]
I0912 17:28:14.757623  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-380af228-618f-4c30-af1b-a21807b1a552/status: (1.2462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42386]
I0912 17:28:14.757854  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59265
I0912 17:28:14.757886  111116 pv_controller.go:798] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" entered phase "Bound"
I0912 17:28:14.757894  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59265
I0912 17:28:14.757901  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: binding to "pvc-380af228-618f-4c30-af1b-a21807b1a552"
I0912 17:28:14.757944  111116 pv_controller.go:901] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.757949  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.757964  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned
I0912 17:28:14.757978  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:14.757990  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:14.759448  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-controller-provisioned: (1.283416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:14.759645  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59266
I0912 17:28:14.759701  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: bound to "pvc-380af228-618f-4c30-af1b-a21807b1a552"
I0912 17:28:14.759713  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned] status: set phase Bound
I0912 17:28:14.761207  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-controller-provisioned/status: (1.335856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:14.761630  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59267
I0912 17:28:14.761660  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" entered phase "Bound"
I0912 17:28:14.761678  111116 pv_controller.go:957] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.761707  111116 pv_controller.go:958] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.761724  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-380af228-618f-4c30-af1b-a21807b1a552", bindCompleted: true, boundByController: true
I0912 17:28:14.761762  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" with version 59267
I0912 17:28:14.761789  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-380af228-618f-4c30-af1b-a21807b1a552", bindCompleted: true, boundByController: true
I0912 17:28:14.761807  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.761818  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: claim is already correctly bound
I0912 17:28:14.761828  111116 pv_controller.go:931] binding volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.761884  111116 pv_controller.go:829] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.761904  111116 pv_controller.go:841] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.761913  111116 pv_controller.go:777] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: set phase Bound
I0912 17:28:14.761937  111116 pv_controller.go:780] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: phase Bound already set
I0912 17:28:14.761946  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: binding to "pvc-380af228-618f-4c30-af1b-a21807b1a552"
I0912 17:28:14.761966  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned]: already bound to "pvc-380af228-618f-4c30-af1b-a21807b1a552"
I0912 17:28:14.761977  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned] status: set phase Bound
I0912 17:28:14.762004  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned] status: phase Bound already set
I0912 17:28:14.762021  111116 pv_controller.go:957] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned"
I0912 17:28:14.762042  111116 pv_controller.go:958] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:14.762064  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-380af228-618f-4c30-af1b-a21807b1a552", bindCompleted: true, boundByController: true
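The block above is one complete bind pass for pvc-controller-provisioned (claim ref on the PV, PV phase Bound, volume name on the PVC, PVC phase Bound), immediately followed by an idempotent second pass that only reports "already bound" and "phase Bound already set". A minimal sketch of that sequence with hypothetical in-memory types, not the real controller:

package main

import "fmt"

type pv struct {
	name, claimRef, phase string
}

type pvc struct {
	name, volumeName, phase string
}

func bind(v *pv, c *pvc) {
	changed := false
	if v.claimRef != c.name { // updating PersistentVolume: binding to claim
		v.claimRef, changed = c.name, true
	}
	if v.phase != "Bound" { // set phase Bound on the volume
		v.phase, changed = "Bound", true
	}
	if c.volumeName != v.name { // updating PersistentVolumeClaim: binding to volume
		c.volumeName, changed = v.name, true
	}
	if c.phase != "Bound" { // set phase Bound on the claim
		c.phase, changed = "Bound", true
	}
	if changed {
		fmt.Printf("volume %q bound to claim %q\n", v.name, c.name)
	} else {
		fmt.Printf("volume %q already bound to claim %q, phases already set\n", v.name, c.name)
	}
}

func main() {
	v := &pv{name: "pvc-380af228-618f-4c30-af1b-a21807b1a552", claimRef: "pvc-controller-provisioned"}
	c := &pvc{name: "pvc-controller-provisioned"}
	bind(v, c) // first sync pass performs the updates
	bind(v, c) // second pass finds everything already set, as in the log
}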
I0912 17:28:14.854796  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.678695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:14.954838  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.734215ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.054761  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.615438ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.154912  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.807852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.254973  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.863345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.355089  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.873637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.386281  111116 httplog.go:90] GET /api/v1/namespaces/default: (1.52534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.387594  111116 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (991.283µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.388886  111116 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (987.965µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.454679  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.588968ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.554774  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.66494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.654688  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.573552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.754833  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.712178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.854715  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.677917ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:15.954631  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.599157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.054861  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.750047ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.154609  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.573624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.254827  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.581412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.355152  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (2.017228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.454782  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.601735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.476228  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
I0912 17:28:16.476266  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound
I0912 17:28:16.476458  111116 scheduler_binder.go:652] All bound volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound" match with Node "node-1"
I0912 17:28:16.476549  111116 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound", node "node-1"
I0912 17:28:16.476569  111116 scheduler_binder.go:267] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I0912 17:28:16.476647  111116 factory.go:606] Attempting to bind pod-i-unbound to node-1
I0912 17:28:16.479312  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound/binding: (2.228713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.479566  111116 scheduler.go:662] pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0912 17:28:16.481370  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.478164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.554794  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-i-unbound: (1.706251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.556555  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-controller-provisioned: (1.133293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
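The repeated GET .../pods/pod-i-unbound requests roughly every 100 ms are presumably the test waiting for the pod to be scheduled; once the binding at 17:28:16.479 lands, the final GETs confirm the pod and its claim before cleanup. A minimal sketch of such a wait loop, assuming a hypothetical getNodeName accessor in place of the real client call:

package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForPodScheduled(getNodeName func() string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if node := getNodeName(); node != "" {
			fmt.Println("pod scheduled on", node)
			return nil
		}
		time.Sleep(100 * time.Millisecond) // matches the request spacing in the log
	}
	return errors.New("timed out waiting for pod to be scheduled")
}

func main() {
	// Simulate the scheduler binding the pod after ~1.5 s, as in the log.
	start := time.Now()
	get := func() string {
		if time.Since(start) > 1500*time.Millisecond {
			return "node-1"
		}
		return ""
	}
	if err := waitForPodScheduled(get, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}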
I0912 17:28:16.561195  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (4.214029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.564367  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (2.8834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.564677  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" deleted
I0912 17:28:16.564714  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59265
I0912 17:28:16.564739  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:16.564746  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned
I0912 17:28:16.565760  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-controller-provisioned: (843.702µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.565962  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned not found
I0912 17:28:16.565984  111116 pv_controller.go:575] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" is released and reclaim policy "Delete" will be executed
I0912 17:28:16.566004  111116 pv_controller.go:777] updating PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: set phase Released
I0912 17:28:16.567701  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-380af228-618f-4c30-af1b-a21807b1a552/status: (1.469303ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.568011  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59273
I0912 17:28:16.568045  111116 pv_controller.go:798] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" entered phase "Released"
I0912 17:28:16.568058  111116 pv_controller.go:1022] reclaimVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: policy is Delete
I0912 17:28:16.568077  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-380af228-618f-4c30-af1b-a21807b1a552[45817223-0b53-4c87-a661-b90c6d0fb86c]]
I0912 17:28:16.568108  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" with version 59273
I0912 17:28:16.568128  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: phase: Released, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned (uid: 380af228-618f-4c30-af1b-a21807b1a552)", boundByController: true
I0912 17:28:16.568146  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned
I0912 17:28:16.568166  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned not found
I0912 17:28:16.568173  111116 pv_controller.go:1022] reclaimVolume[pvc-380af228-618f-4c30-af1b-a21807b1a552]: policy is Delete
I0912 17:28:16.568182  111116 pv_controller.go:1631] scheduleOperation[delete-pvc-380af228-618f-4c30-af1b-a21807b1a552[45817223-0b53-4c87-a661-b90c6d0fb86c]]
I0912 17:28:16.568190  111116 pv_controller.go:1642] operation "delete-pvc-380af228-618f-4c30-af1b-a21807b1a552[45817223-0b53-4c87-a661-b90c6d0fb86c]" is already running, skipping
I0912 17:28:16.568219  111116 pv_controller.go:1146] deleteVolumeOperation [pvc-380af228-618f-4c30-af1b-a21807b1a552] started
I0912 17:28:16.568449  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.699207ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.568559  111116 pv_controller_base.go:212] volume "pvc-380af228-618f-4c30-af1b-a21807b1a552" deleted
I0912 17:28:16.568595  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-controller-provisioned" was already processed
I0912 17:28:16.569167  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-380af228-618f-4c30-af1b-a21807b1a552: (790.43µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.569348  111116 pv_controller.go:1153] error reading persistent volume "pvc-380af228-618f-4c30-af1b-a21807b1a552": persistentvolumes "pvc-380af228-618f-4c30-af1b-a21807b1a552" not found
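The lines above are the teardown of the "immediate provisioned by controller" case: once the claim is deleted the volume is released, its Delete reclaim policy schedules deleteVolumeOperation, and because the test has already removed all PersistentVolumes the operation ends at the 404 with "not found". A minimal sketch of that reclaim path, using hypothetical types:

package main

import "fmt"

type volume struct {
	name, reclaimPolicy, phase string
	deleted                    bool
}

func reclaim(v *volume) {
	v.phase = "Released"
	switch v.reclaimPolicy {
	case "Delete":
		deleteVolume(v)
	case "Retain":
		// leave the released volume for an administrator to handle
	}
}

func deleteVolume(v *volume) {
	if v.deleted { // mirrors "error reading persistent volume ... not found"
		fmt.Printf("volume %q already deleted, nothing to do\n", v.name)
		return
	}
	v.deleted = true
	fmt.Printf("volume %q deleted\n", v.name)
}

func main() {
	v := &volume{name: "pvc-380af228-618f-4c30-af1b-a21807b1a552", reclaimPolicy: "Delete"}
	// The test's DELETE /api/v1/persistentvolumes removes the object before
	// the controller's delete operation runs.
	v.deleted = true
	reclaim(v)
}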
I0912 17:28:16.575186  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.493734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.575346  111116 volume_binding_test.go:751] Running test wait provisioned
I0912 17:28:16.576569  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.038552ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.577977  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (985.446µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.579154  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (883.333µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.580507  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.032721ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.580815  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59281
I0912 17:28:16.580842  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.580860  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:16.580881  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Pending
I0912 17:28:16.580897  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Pending already set
I0912 17:28:16.580944  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"dddfa707-8b2e-49fc-b5e2-a525b93a3e63", APIVersion:"v1", ResourceVersion:"59281", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0912 17:28:16.582184  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (1.208325ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.582275  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.141316ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
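The claim in the "wait provisioned" case only gets the 'WaitForFirstConsumer' event instead of being provisioned immediately because its StorageClass uses volumeBindingMode WaitForFirstConsumer; provisioning starts once a pod that uses the claim is being scheduled. A minimal sketch of that kind of StorageClass (the exact spec in volume_binding_test.go is an assumption here; the provisioner name is taken from the log):

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-4qvk"},
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &mode,
	}
	// Provisioning for pvc-canprovision starts only after the scheduler picks
	// a node for pod-pvc-canprovision, which references this class via the PVC.
	fmt.Printf("class %s: provisioner=%s mode=%s\n", sc.Name, sc.Provisioner, *sc.VolumeBindingMode)
}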
I0912 17:28:16.582410  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision
I0912 17:28:16.582577  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision
I0912 17:28:16.582714  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" on node "node-1"
I0912 17:28:16.582734  111116 scheduler_binder.go:734] Provisioning for claims of pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I0912 17:28:16.582775  111116 scheduler_binder.go:257] AssumePodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision", node "node-1"
I0912 17:28:16.582856  111116 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision", version 59281
I0912 17:28:16.582963  111116 scheduler_binder.go:332] BindPodVolumes for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision", node "node-1"
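The AssumePodVolumes / BindPodVolumes split above is the scheduler binder's two-phase approach: the provisioning decision is first recorded in an in-memory assume cache so scheduling can proceed, and the API updates happen asynchronously while the pod stays in binding (hence the later "Couldn't expire cache ... Binding is still in progress" line). A minimal sketch of that pattern with a hypothetical cache, not the real scheduler_binder:

package main

import (
	"fmt"
	"sync"
)

type assumeCache struct {
	mu      sync.Mutex
	assumed map[string]string // claim -> node chosen for provisioning
}

func (c *assumeCache) assume(claim, node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.assumed[claim] = node
	fmt.Printf("assumed claim %q on node %q\n", claim, node)
}

func (c *assumeCache) bind(claim string, apiUpdate func(claim, node string) error) error {
	c.mu.Lock()
	node := c.assumed[claim]
	c.mu.Unlock()
	// The real binder marks the PVC with the selected node to trigger
	// provisioning and then waits for the PV controller to bind it.
	return apiUpdate(claim, node)
}

func main() {
	cache := &assumeCache{assumed: map[string]string{}}
	cache.assume("pvc-canprovision", "node-1")
	_ = cache.bind("pvc-canprovision", func(claim, node string) error {
		fmt.Printf("updating claim %q for provisioning on %q\n", claim, node)
		return nil
	})
}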
I0912 17:28:16.584531  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.280264ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.584776  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59284
I0912 17:28:16.584803  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.584817  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:16.584824  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:16.584836  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[dddfa707-8b2e-49fc-b5e2-a525b93a3e63]]
I0912 17:28:16.584868  111116 pv_controller.go:1372] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] started, class: "wait-4qvk"
I0912 17:28:16.586425  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.349019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.586600  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59285
I0912 17:28:16.587178  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59285
I0912 17:28:16.587567  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.587692  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: no volume found
I0912 17:28:16.587789  111116 pv_controller.go:1326] provisionClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: started
I0912 17:28:16.587873  111116 pv_controller.go:1631] scheduleOperation[provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[dddfa707-8b2e-49fc-b5e2-a525b93a3e63]]
I0912 17:28:16.587982  111116 pv_controller.go:1642] operation "provision-volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision[dddfa707-8b2e-49fc-b5e2-a525b93a3e63]" is already running, skipping
I0912 17:28:16.587533  111116 httplog.go:90] GET /api/v1/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63: (766.251µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.588362  111116 pv_controller.go:1476] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" created
I0912 17:28:16.588384  111116 pv_controller.go:1493] provisionClaimOperation [volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: trying to save volume pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63
I0912 17:28:16.589587  111116 httplog.go:90] POST /api/v1/persistentvolumes: (1.046372ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.589737  111116 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63", version 59286
I0912 17:28:16.589749  111116 pv_controller.go:1501] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" saved
I0912 17:28:16.589763  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.589767  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" with version 59286
I0912 17:28:16.589771  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:16.589782  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.589785  111116 pv_controller.go:1554] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" provisioned for claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.589794  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:16.589810  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59285
I0912 17:28:16.589818  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.589835  111116 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" found: phase: Pending, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.589844  111116 pv_controller.go:931] binding volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.589853  111116 pv_controller.go:829] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.589863  111116 pv_controller.go:841] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.589870  111116 pv_controller.go:777] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: set phase Bound
I0912 17:28:16.589912  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-canprovision", UID:"dddfa707-8b2e-49fc-b5e2-a525b93a3e63", APIVersion:"v1", ResourceVersion:"59285", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63 using kubernetes.io/mock-provisioner
I0912 17:28:16.591348  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.36792ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:16.591427  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63/status: (1.346178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.591615  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" with version 59288
I0912 17:28:16.591641  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" with version 59288
I0912 17:28:16.591644  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.591658  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:16.591657  111116 pv_controller.go:798] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" entered phase "Bound"
I0912 17:28:16.591669  111116 pv_controller.go:555] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:16.591669  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63"
I0912 17:28:16.591679  111116 pv_controller.go:603] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: volume not bound yet, waiting for syncClaim to fix it
I0912 17:28:16.591682  111116 pv_controller.go:901] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.593201  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.341589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.593375  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59289
I0912 17:28:16.593402  111116 pv_controller.go:912] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: bound to "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63"
I0912 17:28:16.593410  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:16.594844  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision/status: (1.269119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.595070  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59290
I0912 17:28:16.595105  111116 pv_controller.go:742] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" entered phase "Bound"
I0912 17:28:16.595117  111116 pv_controller.go:957] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.595133  111116 pv_controller.go:958] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.595146  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63", bindCompleted: true, boundByController: true
I0912 17:28:16.595173  111116 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" with version 59290
I0912 17:28:16.595239  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: phase: Bound, bound to: "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63", bindCompleted: true, boundByController: true
I0912 17:28:16.595278  111116 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" found: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.595314  111116 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: claim is already correctly bound
I0912 17:28:16.595341  111116 pv_controller.go:931] binding volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.595371  111116 pv_controller.go:829] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: binding to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.595405  111116 pv_controller.go:841] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: already bound to "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.595451  111116 pv_controller.go:777] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: set phase Bound
I0912 17:28:16.595496  111116 pv_controller.go:780] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: phase Bound already set
I0912 17:28:16.595528  111116 pv_controller.go:869] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: binding to "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63"
I0912 17:28:16.595564  111116 pv_controller.go:916] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision]: already bound to "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63"
I0912 17:28:16.595598  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: set phase Bound
I0912 17:28:16.595625  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision] status: phase Bound already set
I0912 17:28:16.595653  111116 pv_controller.go:957] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" bound to claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision"
I0912 17:28:16.595688  111116 pv_controller.go:958] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" status after binding: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:16.595717  111116 pv_controller.go:959] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63", bindCompleted: true, boundByController: true
I0912 17:28:16.684541  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.636978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.784603  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.669259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.885010  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (2.010534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:16.984617  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.692713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.084411  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.574454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.184660  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.807504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.284593  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.733254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.384458  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.574876ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.476577  111116 cache.go:669] Couldn't expire cache for pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision. Binding is still in progress.
I0912 17:28:17.484784  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.908843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.585110  111116 scheduler_binder.go:546] All PVCs for pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision" are bound
I0912 17:28:17.585170  111116 factory.go:606] Attempting to bind pod-pvc-canprovision to node-1
I0912 17:28:17.585816  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.721004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.587523  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision/binding: (2.100849ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.587893  111116 scheduler.go:662] pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-canprovision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0912 17:28:17.590047  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.778666ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.684716  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-canprovision: (1.845533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.686485  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (1.222259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.691368  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (4.455348ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.695075  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (3.338853ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.695441  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" deleted
I0912 17:28:17.695569  111116 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" with version 59288
I0912 17:28:17.695656  111116 pv_controller.go:489] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: phase: Bound, bound to: "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision (uid: dddfa707-8b2e-49fc-b5e2-a525b93a3e63)", boundByController: true
I0912 17:28:17.695724  111116 pv_controller.go:514] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: volume is bound to claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision
I0912 17:28:17.696856  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-canprovision: (851.84µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.697107  111116 pv_controller.go:547] synchronizing PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: claim volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision not found
I0912 17:28:17.697132  111116 pv_controller.go:575] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" is released and reclaim policy "Delete" will be executed
I0912 17:28:17.697142  111116 pv_controller.go:777] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: set phase Released
I0912 17:28:17.698457  111116 store.go:362] GuaranteedUpdate of /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63 failed because of a conflict, going to retry
I0912 17:28:17.698589  111116 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63/status: (1.255043ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.698717  111116 pv_controller.go:790] updating PersistentVolume[pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63": StorageError: invalid object, Code: 4, Key: /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ab89eec9-630a-4db7-8b42-0ec645400bd1, UID in object meta: 
I0912 17:28:17.698748  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.333593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.698763  111116 pv_controller_base.go:202] could not sync volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63": Operation cannot be fulfilled on persistentvolumes "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63": StorageError: invalid object, Code: 4, Key: /786db7ea-de2d-4c3a-a56f-63266d05494a/persistentvolumes/pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ab89eec9-630a-4db7-8b42-0ec645400bd1, UID in object meta: 
I0912 17:28:17.699276  111116 pv_controller_base.go:212] volume "pvc-dddfa707-8b2e-49fc-b5e2-a525b93a3e63" deleted
I0912 17:28:17.699468  111116 pv_controller_base.go:396] deletion of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-canprovision" was already processed
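
The 409 on the PUT .../persistentvolumes/pvc-dddfa707-.../status above is the apiserver's optimistic-concurrency check firing: the volume was deleted concurrently, so the stored UID no longer matches the precondition and the controller logs "could not sync volume" and moves on. The usual client-go pattern for such status updates is to re-fetch the object and retry on conflict; the sketch below is illustrative only (it assumes a recent client-go with context-taking methods, which the 2019-era code here did not have, and it is not the pv_controller's actual logic).

package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markReleased re-reads the PV on every attempt so the update carries the
// latest resourceVersion, retrying only while the apiserver answers 409.
func markReleased(ctx context.Context, client kubernetes.Interface, pvName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := client.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // the PV was deleted underneath us, as in the log above
		}
		if err != nil {
			return err
		}
		pv.Status.Phase = v1.VolumeReleased
		_, err = client.CoreV1().PersistentVolumes().UpdateStatus(ctx, pv, metav1.UpdateOptions{})
		return err
	})
}

When the re-fetch reports NotFound, as effectively happened here, there is nothing left to mark Released, which is why the controller's "could not sync volume" line is immediately followed by the volume's deletion being processed normally.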
I0912 17:28:17.704949  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.799502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.705091  111116 volume_binding_test.go:751] Running test topology unsatisfied
I0912 17:28:17.706458  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.188368ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.710151  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.01422ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.711911  111116 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.286423ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.713707  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (1.405036ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.714004  111116 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch", version 59303
I0912 17:28:17.714032  111116 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0912 17:28:17.714055  111116 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch]: no volume found
I0912 17:28:17.714082  111116 pv_controller.go:683] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch] status: set phase Pending
I0912 17:28:17.714099  111116 pv_controller.go:728] updating PersistentVolumeClaim[volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch] status: phase Pending already set
I0912 17:28:17.714226  111116 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306", Name:"pvc-topomismatch", UID:"412dbfb6-6ffd-46f1-b569-98e91d462934", APIVersion:"v1", ResourceVersion:"59303", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0912 17:28:17.715626  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (1.528525ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.715981  111116 httplog.go:90] POST /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.378278ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.716020  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch
I0912 17:28:17.716033  111116 scheduler.go:530] Attempting to schedule pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch
I0912 17:28:17.716170  111116 scheduler_binder.go:679] No matching volumes for Pod "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch", PVC "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch" on node "node-1"
I0912 17:28:17.716207  111116 scheduler_binder.go:724] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch"
I0912 17:28:17.716240  111116 factory.go:541] Unable to schedule volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0912 17:28:17.716263  111116 factory.go:615] Updating pod condition for volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0912 17:28:17.718132  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.123087ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42392]
I0912 17:28:17.718277  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-topomismatch: (1.698682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.718547  111116 httplog.go:90] PUT /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-topomismatch/status: (2.050778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42390]
I0912 17:28:17.719793  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-topomismatch: (790.985µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.720033  111116 generic_scheduler.go:337] Preemption will not help schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch on any node.
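
The scheduler_binder lines above are the delayed-binding path at work: no existing PV matches pvc-topomismatch, node-1 fails the StorageClass's allowedTopologies check, and the pod is left Unschedulable with preemption correctly ruled out. A rough sketch of the kind of WaitForFirstConsumer class that triggers this rejection follows; the names, provisioner string, and topology key are illustrative assumptions, not the actual fixture in volume_binding_test.go.

package example

import (
	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// topoMismatchClass delays binding until a pod consumes the claim and then
// only allows provisioning in a zone the test node is not labeled with.
func topoMismatchClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topomismatch-sc"},
		Provisioner:       "example.com/fake-provisioner",
		VolumeBindingMode: &mode,
		AllowedTopologies: []v1.TopologySelectorTerm{{
			MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"some-other-zone"},
			}},
		}},
	}
}

Because binding is deferred until pod scheduling, the mismatch surfaces as "0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind" rather than as a provisioning error on the PVC itself.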
I0912 17:28:17.818039  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods/pod-pvc-topomismatch: (1.711618ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.819709  111116 httplog.go:90] GET /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims/pvc-topomismatch: (1.200364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.823797  111116 scheduling_queue.go:830] About to try and schedule pod volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch
I0912 17:28:17.823847  111116 scheduler.go:526] Skip schedule deleting pod: volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pod-pvc-topomismatch
I0912 17:28:17.824826  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (4.56153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.825504  111116 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/events: (1.322167ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42392]
I0912 17:28:17.827956  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (2.687353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.828067  111116 pv_controller_base.go:258] claim "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-topomismatch" deleted
I0912 17:28:17.829191  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (795.918µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.835232  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.757159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.835335  111116 volume_binding_test.go:932] test cluster "volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306" start to tear down
I0912 17:28:17.836220  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pods: (767.839µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.837295  111116 httplog.go:90] DELETE /api/v1/namespaces/volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/persistentvolumeclaims: (804.653µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.838483  111116 httplog.go:90] DELETE /api/v1/persistentvolumes: (814.661µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.839588  111116 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (771.949µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.840081  111116 pv_controller_base.go:298] Shutting down persistent volume controller
I0912 17:28:17.840169  111116 pv_controller_base.go:409] claim worker queue shutting down
I0912 17:28:17.840186  111116 pv_controller_base.go:352] volume worker queue shutting down
I0912 17:28:17.840401  111116 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58690&timeout=8m22s&timeoutSeconds=502&watch=true: (22.2627327s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0912 17:28:17.840401  111116 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58690&timeout=7m25s&timeoutSeconds=445&watch=true: (22.362939376s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41970]
I0912 17:28:17.840416  111116 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58690&timeout=9m19s&timeoutSeconds=559&watch=true: (22.263212418s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I0912 17:28:17.840421  111116 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=58692&timeout=5m35s&timeoutSeconds=335&watch=true: (22.361993607s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41964]
I0912 17:28:17.840423  111116 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=58691&timeout=5m1s&timeoutSeconds=301&watch=true: (22.363316904s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41984]
I0912 17:28:17.840453  111116 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58690&timeout=9m9s&timeoutSeconds=549&watch=true: (22.363711972s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I0912 17:28:17.840468  111116 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58690&timeout=6m20s&timeoutSeconds=380&watch=true: (22.361037335s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0912 17:28:17.840469  111116 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=58690&timeout=8m23s&timeoutSeconds=503&watch=true: (22.360178984s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0912 17:28:17.840486  111116 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=58691&timeout=8m33s&timeoutSeconds=513&watch=true: (22.362802241s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0912 17:28:17.840511  111116 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58691&timeout=7m2s&timeoutSeconds=422&watch=true: (22.263476597s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0912 17:28:17.840520  111116 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=58692&timeout=9m51s&timeoutSeconds=591&watch=true: (22.363139202s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41978]
I0912 17:28:17.840522  111116 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58690&timeout=5m14s&timeoutSeconds=314&watch=true: (22.36180153s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41968]
I0912 17:28:17.840539  111116 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58690&timeout=9m16s&timeoutSeconds=556&watch=true: (22.263404086s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0912 17:28:17.840554  111116 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=58933&timeout=7m25s&timeoutSeconds=445&watch=true: (22.361807367s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41974]
I0912 17:28:17.840580  111116 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58691&timeout=7m46s&timeoutSeconds=466&watch=true: (22.363013679s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0912 17:28:17.840587  111116 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58690&timeout=5m21s&timeoutSeconds=321&watch=true: (22.263121479s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I0912 17:28:17.844091  111116 httplog.go:90] DELETE /api/v1/nodes: (3.533834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.844243  111116 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0912 17:28:17.845392  111116 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (893.909µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0912 17:28:17.847173  111116 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.363073ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
W0912 17:28:17.847669  111116 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0912 17:28:17.847689  111116 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeProvision (25.93s)
    volume_binding_test.go:1149: Provisioning annotation on PVC volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind not behaving as expected: PVC volume-schedulingb15d1fda-8a3e-4863-ab95-c1309f968306/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1191: PV pv-w-canbind phase not Bound, got Available

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190912-171656.xml
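
Both failed assertions concern the pre-created "canbind" fixtures rather than the dynamically provisioned claim: pvc-w-canbind is expected to bind to the existing pv-w-canbind, so it must not pick up the volume scheduler's selected-node annotation, and that PV must end up Bound rather than Available. The sketch below shows roughly what those checks amount to with client-go; the function name and structure are illustrative rather than the test's actual code, and it assumes a recent client-go with context-taking methods.

package example

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// annSelectedNode is the annotation the volume scheduler places on a PVC when
// it decides to dynamically provision a volume for a chosen node.
const annSelectedNode = "volume.kubernetes.io/selected-node"

// checkCanBindNotProvisioned mirrors the two assertions that failed above.
func checkCanBindNotProvisioned(ctx context.Context, c kubernetes.Interface, ns string) error {
	pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, "pvc-w-canbind", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if _, found := pvc.Annotations[annSelectedNode]; found {
		return fmt.Errorf("PVC %s/%s not expected to be provisioned, but found selected-node annotation", ns, pvc.Name)
	}
	pv, err := c.CoreV1().PersistentVolumes().Get(ctx, "pv-w-canbind", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pv.Status.Phase != v1.VolumeBound {
		return fmt.Errorf("PV %s phase not Bound, got %v", pv.Name, pv.Status.Phase)
	}
	return nil
}

Seeing the annotation on pvc-w-canbind together with pv-w-canbind still Available is consistent with the binder having chosen to provision a new volume instead of using the pre-created one, which is exactly what the two messages report.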




Error lines from build-log.txt

... skipping 944 lines ...
W0912 17:11:45.553] I0912 17:11:45.535496   52872 node_lifecycle_controller.go:458] Controller will reconcile labels.
W0912 17:11:45.553] I0912 17:11:45.535511   52872 node_lifecycle_controller.go:471] Controller will taint node by condition.
W0912 17:11:45.554] I0912 17:11:45.535568   52872 controllermanager.go:534] Started "nodelifecycle"
W0912 17:11:45.554] I0912 17:11:45.535676   52872 node_lifecycle_controller.go:495] Starting node controller
W0912 17:11:45.554] I0912 17:11:45.535702   52872 shared_informer.go:197] Waiting for caches to sync for taint
W0912 17:11:45.555] I0912 17:11:45.536062   52872 node_lifecycle_controller.go:77] Sending events to api server
W0912 17:11:45.555] E0912 17:11:45.536380   52872 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0912 17:11:45.556] W0912 17:11:45.536588   52872 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0912 17:11:45.556] I0912 17:11:45.537360   52872 controllermanager.go:534] Started "replicaset"
W0912 17:11:45.556] I0912 17:11:45.537392   52872 replica_set.go:182] Starting replicaset controller
W0912 17:11:45.557] I0912 17:11:45.537850   52872 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
W0912 17:11:45.557] I0912 17:11:45.538285   52872 controllermanager.go:534] Started "statefulset"
W0912 17:11:45.557] I0912 17:11:45.538490   52872 stateful_set.go:145] Starting stateful set controller
... skipping 13 lines ...
W0912 17:11:46.035] I0912 17:11:45.944684   52872 controllermanager.go:534] Started "garbagecollector"
W0912 17:11:46.035] I0912 17:11:45.944738   52872 graph_builder.go:282] GraphBuilder running
W0912 17:11:46.035] I0912 17:11:45.954784   52872 controllermanager.go:534] Started "deployment"
W0912 17:11:46.035] I0912 17:11:45.955230   52872 deployment_controller.go:152] Starting deployment controller
W0912 17:11:46.035] I0912 17:11:45.955275   52872 shared_informer.go:197] Waiting for caches to sync for deployment
W0912 17:11:46.035] I0912 17:11:45.956234   52872 controllermanager.go:534] Started "csrcleaner"
W0912 17:11:46.036] E0912 17:11:45.957013   52872 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0912 17:11:46.036] W0912 17:11:45.957340   52872 controllermanager.go:526] Skipping "service"
W0912 17:11:46.036] I0912 17:11:45.957244   52872 cleaner.go:81] Starting CSR cleaner controller
W0912 17:11:46.036] I0912 17:11:45.958454   52872 controllermanager.go:534] Started "persistentvolume-binder"
W0912 17:11:46.036] I0912 17:11:45.958559   52872 pv_controller_base.go:282] Starting persistent volume controller
W0912 17:11:46.036] I0912 17:11:45.958638   52872 shared_informer.go:197] Waiting for caches to sync for persistent volume
W0912 17:11:46.036] W0912 17:11:45.959164   52872 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
... skipping 13 lines ...
W0912 17:11:46.039] I0912 17:11:46.006739   52872 shared_informer.go:204] Caches are synced for job 
W0912 17:11:46.039] I0912 17:11:46.009123   52872 shared_informer.go:204] Caches are synced for PV protection 
W0912 17:11:46.040] I0912 17:11:46.011858   52872 shared_informer.go:204] Caches are synced for PVC protection 
W0912 17:11:46.040] I0912 17:11:46.038213   52872 shared_informer.go:204] Caches are synced for ReplicaSet 
W0912 17:11:46.061] I0912 17:11:46.061195   52872 shared_informer.go:204] Caches are synced for endpoint 
W0912 17:11:46.066] I0912 17:11:46.066249   52872 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0912 17:11:46.080] E0912 17:11:46.079551   52872 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0912 17:11:46.093] E0912 17:11:46.092652   52872 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0912 17:11:46.114] W0912 17:11:46.113969   52872 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0912 17:11:46.136] I0912 17:11:46.135799   52872 shared_informer.go:204] Caches are synced for taint 
W0912 17:11:46.137] I0912 17:11:46.136282   52872 node_lifecycle_controller.go:1253] Initializing eviction metric for zone: 
W0912 17:11:46.137] I0912 17:11:46.136538   52872 node_lifecycle_controller.go:1103] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0912 17:11:46.138] I0912 17:11:46.136902   52872 taint_manager.go:186] Starting NoExecuteTaintManager
W0912 17:11:46.138] I0912 17:11:46.137423   52872 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"70f5a4f3-8bdf-4e4c-bd3e-0129b2aa8a42", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0912 17:11:46.160] I0912 17:11:46.160388   52872 shared_informer.go:204] Caches are synced for attach detach 
... skipping 85 lines ...
I0912 17:11:49.953] +++ working dir: /go/src/k8s.io/kubernetes
I0912 17:11:49.955] +++ command: run_RESTMapper_evaluation_tests
I0912 17:11:49.966] +++ [0912 17:11:49] Creating namespace namespace-1568308309-28270
I0912 17:11:50.043] namespace/namespace-1568308309-28270 created
I0912 17:11:50.122] Context "test" modified.
I0912 17:11:50.128] +++ [0912 17:11:50] Testing RESTMapper
I0912 17:11:50.240] +++ [0912 17:11:50] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0912 17:11:50.255] +++ exit code: 0
I0912 17:11:50.393] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0912 17:11:50.396] bindings                                                                      true         Binding
I0912 17:11:50.399] componentstatuses                 cs                                          false        ComponentStatus
I0912 17:11:50.403] configmaps                        cm                                          true         ConfigMap
I0912 17:11:50.407] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0912 17:12:10.806] poddisruptionbudget.policy/test-pdb-3 created
I0912 17:12:10.899] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0912 17:12:10.967] poddisruptionbudget.policy/test-pdb-4 created
I0912 17:12:11.052] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0912 17:12:11.196] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:12:11.378] pod/env-test-pod created
W0912 17:12:11.479] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0912 17:12:11.479] error: setting 'all' parameter but found a non empty selector. 
W0912 17:12:11.479] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 17:12:11.480] I0912 17:12:10.468282   49311 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0912 17:12:11.480] error: min-available and max-unavailable cannot be both specified
I0912 17:12:11.580] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0912 17:12:11.581] Name:         env-test-pod
I0912 17:12:11.581] Namespace:    test-kubectl-describe-pod
I0912 17:12:11.581] Priority:     0
I0912 17:12:11.581] Node:         <none>
I0912 17:12:11.581] Labels:       <none>
... skipping 174 lines ...
I0912 17:12:24.443] pod/valid-pod patched
I0912 17:12:24.528] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0912 17:12:24.599] pod/valid-pod patched
I0912 17:12:24.692] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0912 17:12:24.833] pod/valid-pod patched
I0912 17:12:24.921] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0912 17:12:25.088] +++ [0912 17:12:25] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0912 17:12:25.309] pod "valid-pod" deleted
I0912 17:12:25.317] pod/valid-pod replaced
I0912 17:12:25.405] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0912 17:12:25.557] Successful
I0912 17:12:25.558] message:error: --grace-period must have --force specified
I0912 17:12:25.558] has:\-\-grace-period must have \-\-force specified
I0912 17:12:25.713] Successful
I0912 17:12:25.714] message:error: --timeout must have --force specified
I0912 17:12:25.714] has:\-\-timeout must have \-\-force specified
I0912 17:12:25.867] node/node-v1-test created
W0912 17:12:25.968] W0912 17:12:25.867197   52872 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0912 17:12:26.069] node/node-v1-test replaced
I0912 17:12:26.107] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0912 17:12:26.180] node "node-v1-test" deleted
I0912 17:12:26.279] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0912 17:12:26.524] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0912 17:12:27.483] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 55 lines ...
I0912 17:12:30.989] +++ exit code: 0
W0912 17:12:31.090] I0912 17:12:26.139097   52872 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"1fad15c1-577b-4af7-a922-895547c0ce37", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
W0912 17:12:31.091] Edit cancelled, no changes made.
W0912 17:12:31.091] Edit cancelled, no changes made.
W0912 17:12:31.091] Edit cancelled, no changes made.
W0912 17:12:31.091] Edit cancelled, no changes made.
W0912 17:12:31.091] error: 'name' already has a value (valid-pod), and --overwrite is false
W0912 17:12:31.091] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 17:12:31.092] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0912 17:12:31.140] I0912 17:12:31.139454   52872 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"1fad15c1-577b-4af7-a922-895547c0ce37", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-v1-test event: Removing Node node-v1-test from Controller
I0912 17:12:31.438] Recording: run_save_config_tests
I0912 17:12:31.438] Running command: run_save_config_tests
I0912 17:12:31.457] 
... skipping 54 lines ...
I0912 17:12:35.769] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0912 17:12:35.773] +++ working dir: /go/src/k8s.io/kubernetes
I0912 17:12:35.776] +++ command: run_kubectl_create_error_tests
I0912 17:12:35.789] +++ [0912 17:12:35] Creating namespace namespace-1568308355-24767
I0912 17:12:35.895] namespace/namespace-1568308355-24767 created
I0912 17:12:35.990] Context "test" modified.
I0912 17:12:35.998] +++ [0912 17:12:35] Testing kubectl create with error
W0912 17:12:36.099] Error: must specify one of -f and -k
W0912 17:12:36.099] 
W0912 17:12:36.099] Create a resource from a file or from stdin.
W0912 17:12:36.099] 
W0912 17:12:36.099]  JSON and YAML formats are accepted.
W0912 17:12:36.099] 
W0912 17:12:36.100] Examples:
... skipping 41 lines ...
W0912 17:12:36.107] 
W0912 17:12:36.107] Usage:
W0912 17:12:36.107]   kubectl create -f FILENAME [options]
W0912 17:12:36.107] 
W0912 17:12:36.107] Use "kubectl <command> --help" for more information about a given command.
W0912 17:12:36.107] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0912 17:12:36.267] +++ [0912 17:12:36] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0912 17:12:36.367] kubectl convert is DEPRECATED and will be removed in a future version.
W0912 17:12:36.368] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0912 17:12:36.494] +++ exit code: 0
I0912 17:12:36.527] Recording: run_kubectl_apply_tests
I0912 17:12:36.528] Running command: run_kubectl_apply_tests
I0912 17:12:36.552] 
... skipping 17 lines ...
I0912 17:12:38.578] pod "test-pod" deleted
I0912 17:12:38.857] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0912 17:12:39.203] I0912 17:12:39.202547   49311 client.go:361] parsed scheme: "endpoint"
W0912 17:12:39.203] I0912 17:12:39.202606   49311 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0912 17:12:39.208] I0912 17:12:39.207508   49311 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0912 17:12:39.308] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0912 17:12:39.409] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0912 17:12:39.510] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0912 17:12:39.511] +++ exit code: 0
I0912 17:12:39.511] Recording: run_kubectl_run_tests
I0912 17:12:39.511] Running command: run_kubectl_run_tests
I0912 17:12:39.539] 
I0912 17:12:39.543] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0912 17:12:42.665] Context "test" modified.
I0912 17:12:42.673] +++ [0912 17:12:42] Testing kubectl create filter
I0912 17:12:42.776] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:12:43.011] pod/selector-test-pod created
I0912 17:12:43.127] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0912 17:12:43.217] Successful
I0912 17:12:43.217] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0912 17:12:43.218] has:pods "selector-test-pod-dont-apply" not found
I0912 17:12:43.322] pod "selector-test-pod" deleted
I0912 17:12:43.344] +++ exit code: 0
I0912 17:12:43.380] Recording: run_kubectl_apply_deployments_tests
I0912 17:12:43.381] Running command: run_kubectl_apply_deployments_tests
I0912 17:12:43.404] 
... skipping 29 lines ...
W0912 17:12:46.150] I0912 17:12:46.052506   52872 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568308363-21308", Name:"nginx", UID:"81aaf683-60a2-489c-a772-6a17c7e16712", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0912 17:12:46.150] I0912 17:12:46.056464   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-8484dd655", UID:"d166e85f-c442-40e5-a62a-5347d87841eb", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-7j4n7
W0912 17:12:46.151] I0912 17:12:46.059212   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-8484dd655", UID:"d166e85f-c442-40e5-a62a-5347d87841eb", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-wpnm8
W0912 17:12:46.151] I0912 17:12:46.059654   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-8484dd655", UID:"d166e85f-c442-40e5-a62a-5347d87841eb", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-s95jj
I0912 17:12:46.252] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0912 17:12:50.443] Successful
I0912 17:12:50.444] message:Error from server (Conflict): error when applying patch:
I0912 17:12:50.445] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568308363-21308\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0912 17:12:50.445] to:
I0912 17:12:50.445] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0912 17:12:50.445] Name: "nginx", Namespace: "namespace-1568308363-21308"
I0912 17:12:50.447] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568308363-21308\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-12T17:12:46Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568308363-21308" "resourceVersion":"595" "selfLink":"/apis/apps/v1/namespaces/namespace-1568308363-21308/deployments/nginx" "uid":"81aaf683-60a2-489c-a772-6a17c7e16712"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-12T17:12:46Z" "lastUpdateTime":"2019-09-12T17:12:46Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-12T17:12:46Z" "lastUpdateTime":"2019-09-12T17:12:46Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0912 17:12:50.447] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0912 17:12:50.448] has:Error from server (Conflict)
W0912 17:12:50.548] I0912 17:12:50.083037   52872 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568308352-19203
I0912 17:12:55.675] deployment.apps/nginx configured
I0912 17:12:55.759] Successful
I0912 17:12:55.759] message:        "name": "nginx2"
I0912 17:12:55.759]           "name": "nginx2"
I0912 17:12:55.759] has:"name": "nginx2"
W0912 17:12:55.860] I0912 17:12:55.677761   52872 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568308363-21308", Name:"nginx", UID:"c0bab30e-4ae8-40d1-aaad-a39310cb8113", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0912 17:12:55.862] I0912 17:12:55.682477   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"6bf36eb8-3254-4096-b570-786b57e76bcc", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-swf87
W0912 17:12:55.863] I0912 17:12:55.684491   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"6bf36eb8-3254-4096-b570-786b57e76bcc", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-td5vj
W0912 17:12:55.863] I0912 17:12:55.685472   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"6bf36eb8-3254-4096-b570-786b57e76bcc", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-2gtmh
W0912 17:12:59.964] E0912 17:12:59.964190   52872 replica_set.go:450] Sync "namespace-1568308363-21308/nginx-668b6c7744" failed with Operation cannot be fulfilled on replicasets.apps "nginx-668b6c7744": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1568308363-21308/nginx-668b6c7744, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6bf36eb8-3254-4096-b570-786b57e76bcc, UID in object meta: 
W0912 17:13:00.949] I0912 17:13:00.948416   52872 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568308363-21308", Name:"nginx", UID:"c866b793-a9ca-4e67-b083-6fcbb10e4f53", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0912 17:13:00.952] I0912 17:13:00.952153   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"31a61c5e-9b05-4c7c-b506-98124ce4353c", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-kdxgx
W0912 17:13:00.955] I0912 17:13:00.955415   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"31a61c5e-9b05-4c7c-b506-98124ce4353c", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-ld856
W0912 17:13:00.957] I0912 17:13:00.955660   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308363-21308", Name:"nginx-668b6c7744", UID:"31a61c5e-9b05-4c7c-b506-98124ce4353c", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-fr56d
I0912 17:13:01.057] Successful
I0912 17:13:01.058] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 132 lines ...
I0912 17:13:03.875] +++ [0912 17:13:03] Creating namespace namespace-1568308383-32663
I0912 17:13:03.954] namespace/namespace-1568308383-32663 created
I0912 17:13:04.029] Context "test" modified.
I0912 17:13:04.035] +++ [0912 17:13:04] Testing kubectl get
I0912 17:13:04.118] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:04.196] Successful
I0912 17:13:04.196] message:Error from server (NotFound): pods "abc" not found
I0912 17:13:04.196] has:pods "abc" not found
I0912 17:13:04.276] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:04.362] Successful
I0912 17:13:04.362] message:Error from server (NotFound): pods "abc" not found
I0912 17:13:04.363] has:pods "abc" not found
I0912 17:13:04.442] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:04.518] Successful
I0912 17:13:04.518] message:{
I0912 17:13:04.519]     "apiVersion": "v1",
I0912 17:13:04.519]     "items": [],
... skipping 23 lines ...
I0912 17:13:04.829] has not:No resources found
I0912 17:13:04.899] Successful
I0912 17:13:04.899] message:NAME
I0912 17:13:04.900] has not:No resources found
I0912 17:13:04.988] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:05.080] Successful
I0912 17:13:05.081] message:error: the server doesn't have a resource type "foobar"
I0912 17:13:05.081] has not:No resources found
I0912 17:13:05.163] Successful
I0912 17:13:05.164] message:No resources found in namespace-1568308383-32663 namespace.
I0912 17:13:05.164] has:No resources found
I0912 17:13:05.243] Successful
I0912 17:13:05.244] message:
I0912 17:13:05.244] has not:No resources found
I0912 17:13:05.317] Successful
I0912 17:13:05.318] message:No resources found in namespace-1568308383-32663 namespace.
I0912 17:13:05.318] has:No resources found
I0912 17:13:05.397] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:05.478] Successful
I0912 17:13:05.478] message:Error from server (NotFound): pods "abc" not found
I0912 17:13:05.478] has:pods "abc" not found
I0912 17:13:05.480] FAIL!
I0912 17:13:05.480] message:Error from server (NotFound): pods "abc" not found
I0912 17:13:05.480] has not:List
I0912 17:13:05.480] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0912 17:13:05.585] Successful
I0912 17:13:05.585] message:I0912 17:13:05.540006   62799 loader.go:375] Config loaded from file:  /tmp/tmp.4WYMfmEl0I/.kube/config
I0912 17:13:05.586] I0912 17:13:05.541747   62799 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0912 17:13:05.586] I0912 17:13:05.561197   62799 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0912 17:13:11.075] Successful
I0912 17:13:11.075] message:NAME    DATA   AGE
I0912 17:13:11.076] one     0      1s
I0912 17:13:11.076] three   0      1s
I0912 17:13:11.076] two     0      1s
I0912 17:13:11.076] STATUS    REASON          MESSAGE
I0912 17:13:11.076] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 17:13:11.076] has not:watch is only supported on individual resources
I0912 17:13:12.158] Successful
I0912 17:13:12.158] message:STATUS    REASON          MESSAGE
I0912 17:13:12.159] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 17:13:12.159] has not:watch is only supported on individual resources
I0912 17:13:12.162] +++ [0912 17:13:12] Creating namespace namespace-1568308392-17523
I0912 17:13:12.230] namespace/namespace-1568308392-17523 created
I0912 17:13:12.297] Context "test" modified.
I0912 17:13:12.382] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:12.522] pod/valid-pod created
... skipping 56 lines ...
I0912 17:13:12.602] }
I0912 17:13:12.680] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0912 17:13:12.892] <no value>Successful
I0912 17:13:12.892] message:valid-pod:
I0912 17:13:12.892] has:valid-pod:
I0912 17:13:12.966] Successful
I0912 17:13:12.966] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0912 17:13:12.966] 	template was:
I0912 17:13:12.966] 		{.missing}
I0912 17:13:12.967] 	object given to jsonpath engine was:
I0912 17:13:12.968] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-12T17:13:12Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568308392-17523", "resourceVersion":"698", "selfLink":"/api/v1/namespaces/namespace-1568308392-17523/pods/valid-pod", "uid":"601a93f0-dae4-4d6e-8bcd-be3e675884bf"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0912 17:13:12.968] has:missing is not found
I0912 17:13:13.042] Successful
I0912 17:13:13.043] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0912 17:13:13.043] 	template was:
I0912 17:13:13.043] 		{{.missing}}
I0912 17:13:13.043] 	raw data was:
I0912 17:13:13.044] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-12T17:13:12Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568308392-17523","resourceVersion":"698","selfLink":"/api/v1/namespaces/namespace-1568308392-17523/pods/valid-pod","uid":"601a93f0-dae4-4d6e-8bcd-be3e675884bf"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0912 17:13:13.045] 	object given to template engine was:
I0912 17:13:13.045] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-12T17:13:12Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568308392-17523 resourceVersion:698 selfLink:/api/v1/namespaces/namespace-1568308392-17523/pods/valid-pod uid:601a93f0-dae4-4d6e-8bcd-be3e675884bf] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0912 17:13:13.046] has:map has no entry for key "missing"
W0912 17:13:13.146] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0912 17:13:14.118] Successful
I0912 17:13:14.119] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 17:13:14.119] valid-pod   0/1     Pending   0          1s
I0912 17:13:14.119] STATUS      REASON          MESSAGE
I0912 17:13:14.119] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 17:13:14.119] has:STATUS
I0912 17:13:14.121] Successful
I0912 17:13:14.121] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 17:13:14.121] valid-pod   0/1     Pending   0          1s
I0912 17:13:14.121] STATUS      REASON          MESSAGE
I0912 17:13:14.122] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 17:13:14.122] has:valid-pod
I0912 17:13:15.200] Successful
I0912 17:13:15.200] message:pod/valid-pod
I0912 17:13:15.200] has not:STATUS
I0912 17:13:15.202] Successful
I0912 17:13:15.202] message:pod/valid-pod
... skipping 72 lines ...
I0912 17:13:16.288] status:
I0912 17:13:16.288]   phase: Pending
I0912 17:13:16.288]   qosClass: Guaranteed
I0912 17:13:16.288] ---
I0912 17:13:16.288] has:name: valid-pod
I0912 17:13:16.361] Successful
I0912 17:13:16.361] message:Error from server (NotFound): pods "invalid-pod" not found
I0912 17:13:16.361] has:"invalid-pod" not found
I0912 17:13:16.432] pod "valid-pod" deleted
I0912 17:13:16.512] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:13:16.668] pod/redis-master created
I0912 17:13:16.671] pod/valid-pod created
I0912 17:13:16.755] Successful
... skipping 31 lines ...
I0912 17:13:17.898] +++ command: run_kubectl_exec_pod_tests
I0912 17:13:17.908] +++ [0912 17:13:17] Creating namespace namespace-1568308397-10410
I0912 17:13:17.995] namespace/namespace-1568308397-10410 created
I0912 17:13:18.057] Context "test" modified.
I0912 17:13:18.062] +++ [0912 17:13:18] Testing kubectl exec POD COMMAND
I0912 17:13:18.139] Successful
I0912 17:13:18.139] message:Error from server (NotFound): pods "abc" not found
I0912 17:13:18.139] has:pods "abc" not found
W0912 17:13:18.240] I0912 17:13:17.327005   52872 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568308392-17523", Name:"test-the-deployment", UID:"7dcaf11c-e4d1-4309-9af4-b15f0ab619d4", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-69fdbb5f7d to 3
W0912 17:13:18.241] I0912 17:13:17.330104   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308392-17523", Name:"test-the-deployment-69fdbb5f7d", UID:"6bfe2810-ddd8-4c38-9e0e-8acc615bb759", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-zhrxb
W0912 17:13:18.241] I0912 17:13:17.333067   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308392-17523", Name:"test-the-deployment-69fdbb5f7d", UID:"6bfe2810-ddd8-4c38-9e0e-8acc615bb759", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-dl79d
W0912 17:13:18.242] I0912 17:13:17.333372   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308392-17523", Name:"test-the-deployment-69fdbb5f7d", UID:"6bfe2810-ddd8-4c38-9e0e-8acc615bb759", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-7548s
I0912 17:13:18.342] pod/test-pod created
I0912 17:13:18.378] Successful
I0912 17:13:18.379] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 17:13:18.379] has not:pods "test-pod" not found
I0912 17:13:18.380] Successful
I0912 17:13:18.380] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 17:13:18.381] has not:pod or type/name must be specified
I0912 17:13:18.449] pod "test-pod" deleted
I0912 17:13:18.466] +++ exit code: 0
I0912 17:13:18.493] Recording: run_kubectl_exec_resource_name_tests
I0912 17:13:18.494] Running command: run_kubectl_exec_resource_name_tests
I0912 17:13:18.512] 
... skipping 2 lines ...
I0912 17:13:18.519] +++ command: run_kubectl_exec_resource_name_tests
I0912 17:13:18.527] +++ [0912 17:13:18] Creating namespace namespace-1568308398-154
I0912 17:13:18.600] namespace/namespace-1568308398-154 created
I0912 17:13:18.803] Context "test" modified.
I0912 17:13:18.809] +++ [0912 17:13:18] Testing kubectl exec TYPE/NAME COMMAND
I0912 17:13:18.938] Successful
I0912 17:13:18.938] message:error: the server doesn't have a resource type "foo"
I0912 17:13:18.938] has:error:
I0912 17:13:19.025] Successful
I0912 17:13:19.026] message:Error from server (NotFound): deployments.apps "bar" not found
I0912 17:13:19.026] has:"bar" not found
I0912 17:13:19.191] pod/test-pod created
I0912 17:13:19.355] replicaset.apps/frontend created
W0912 17:13:19.456] I0912 17:13:19.358581   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308398-154", Name:"frontend", UID:"5c743589-2ddf-4ca0-adfd-290286909fb7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-57ngd
W0912 17:13:19.457] I0912 17:13:19.361393   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308398-154", Name:"frontend", UID:"5c743589-2ddf-4ca0-adfd-290286909fb7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vc5r6
W0912 17:13:19.458] I0912 17:13:19.361733   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308398-154", Name:"frontend", UID:"5c743589-2ddf-4ca0-adfd-290286909fb7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lbcjt
I0912 17:13:19.558] configmap/test-set-env-config created
I0912 17:13:19.588] Successful
I0912 17:13:19.589] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0912 17:13:19.589] has:not implemented
I0912 17:13:19.673] Successful
I0912 17:13:19.673] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 17:13:19.673] has not:not found
I0912 17:13:19.675] Successful
I0912 17:13:19.675] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 17:13:19.675] has not:pod or type/name must be specified
I0912 17:13:19.771] Successful
I0912 17:13:19.772] message:Error from server (BadRequest): pod frontend-57ngd does not have a host assigned
I0912 17:13:19.772] has not:not found
I0912 17:13:19.773] Successful
I0912 17:13:19.774] message:Error from server (BadRequest): pod frontend-57ngd does not have a host assigned
I0912 17:13:19.774] has not:pod or type/name must be specified
I0912 17:13:19.853] pod "test-pod" deleted
I0912 17:13:19.936] replicaset.apps "frontend" deleted
I0912 17:13:20.014] configmap "test-set-env-config" deleted
I0912 17:13:20.031] +++ exit code: 0
I0912 17:13:20.060] Recording: run_create_secret_tests
I0912 17:13:20.060] Running command: run_create_secret_tests
I0912 17:13:20.080] 
I0912 17:13:20.082] +++ Running case: test-cmd.run_create_secret_tests 
I0912 17:13:20.085] +++ working dir: /go/src/k8s.io/kubernetes
I0912 17:13:20.088] +++ command: run_create_secret_tests
I0912 17:13:20.169] Successful
I0912 17:13:20.169] message:Error from server (NotFound): secrets "mysecret" not found
I0912 17:13:20.169] has:secrets "mysecret" not found
I0912 17:13:20.312] Successful
I0912 17:13:20.312] message:Error from server (NotFound): secrets "mysecret" not found
I0912 17:13:20.312] has:secrets "mysecret" not found
I0912 17:13:20.313] Successful
I0912 17:13:20.314] message:user-specified
I0912 17:13:20.314] has:user-specified
I0912 17:13:20.380] Successful
I0912 17:13:20.467] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1127b778-13f5-4e54-9bd2-3b70577354d8","resourceVersion":"772","creationTimestamp":"2019-09-12T17:13:20Z"}}
... skipping 2 lines ...
I0912 17:13:20.630] has:uid
I0912 17:13:20.697] Successful
I0912 17:13:20.698] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1127b778-13f5-4e54-9bd2-3b70577354d8","resourceVersion":"773","creationTimestamp":"2019-09-12T17:13:20Z"},"data":{"key1":"config1"}}
I0912 17:13:20.698] has:config1
I0912 17:13:20.765] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"1127b778-13f5-4e54-9bd2-3b70577354d8"}}
I0912 17:13:20.847] Successful
I0912 17:13:20.847] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0912 17:13:20.848] has:configmaps "tester-update-cm" not found
I0912 17:13:20.859] +++ exit code: 0
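The secret checks above confirm that a missing secret returns NotFound, that one can be created with user-specified data, and that the tester-update-cm ConfigMap can be created, updated, and deleted through the raw API path shown in the JSON lines. A rough shell equivalent, assuming the default namespace and the names from this log (the literal key/value is illustrative):

  # Before creation, the secret is NotFound.
  kubectl get secret mysecret
  # Create it from a literal, read the value back, then delete it.
  kubectl create secret generic mysecret --from-literal=key1=user-specified
  kubectl get secret mysecret -o jsonpath='{.data.key1}' | base64 --decode
  kubectl delete secret mysecret
  # The ConfigMap JSON above came from the raw REST endpoint; a direct read looks like:
  kubectl get --raw /api/v1/namespaces/default/configmaps/tester-update-cm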
I0912 17:13:20.888] Recording: run_kubectl_create_kustomization_directory_tests
I0912 17:13:20.889] Running command: run_kubectl_create_kustomization_directory_tests
I0912 17:13:20.911] 
I0912 17:13:20.913] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
W0912 17:13:23.450] I0912 17:13:21.363255   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308398-154", Name:"test-the-deployment-69fdbb5f7d", UID:"9016ad09-ecbf-45d8-b684-db13983e5ad3", APIVersion:"apps/v1", ResourceVersion:"781", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-8bv4h
W0912 17:13:23.451] I0912 17:13:21.363989   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308398-154", Name:"test-the-deployment-69fdbb5f7d", UID:"9016ad09-ecbf-45d8-b684-db13983e5ad3", APIVersion:"apps/v1", ResourceVersion:"781", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-v8b5d
I0912 17:13:24.427] Successful
I0912 17:13:24.427] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 17:13:24.428] valid-pod   0/1     Pending   0          0s
I0912 17:13:24.428] STATUS      REASON          MESSAGE
I0912 17:13:24.428] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 17:13:24.429] has:Timeout exceeded while reading body
I0912 17:13:24.503] Successful
I0912 17:13:24.504] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 17:13:24.504] valid-pod   0/1     Pending   0          1s
I0912 17:13:24.504] has:valid-pod
I0912 17:13:24.570] Successful
I0912 17:13:24.571] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0912 17:13:24.571] has:Invalid timeout value
I0912 17:13:24.652] pod "valid-pod" deleted
I0912 17:13:24.668] +++ exit code: 0
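The block above exercises client-side request timeouts: a timeout too short for a request that holds the connection open surfaces the net/http Client.Timeout error, a normal get still lists valid-pod, and a malformed timeout value is rejected before any request is sent. A rough sketch, assuming a pod named valid-pod exists; the exact flag values the test uses may differ:

  # A one-second timeout can cut off a request that keeps the connection open.
  kubectl get pods --watch --request-timeout=1s
  # A generous timeout behaves like a normal get.
  kubectl get pods --request-timeout=30s
  # Non-numeric values fail client-side validation: "Invalid timeout value".
  kubectl get pods --request-timeout=foo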
I0912 17:13:24.697] Recording: run_crd_tests
I0912 17:13:24.697] Running command: run_crd_tests
I0912 17:13:24.715] 
... skipping 158 lines ...
I0912 17:13:29.361] foo.company.com/test patched
I0912 17:13:29.475] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0912 17:13:29.566] foo.company.com/test patched
I0912 17:13:29.661] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0912 17:13:29.758] foo.company.com/test patched
I0912 17:13:29.870] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0912 17:13:30.047] +++ [0912 17:13:30] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0912 17:13:30.126] {
I0912 17:13:30.126]     "apiVersion": "company.com/v1",
I0912 17:13:30.127]     "kind": "Foo",
I0912 17:13:30.127]     "metadata": {
I0912 17:13:30.128]         "annotations": {
I0912 17:13:30.128]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 209 lines ...
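The foos.company.com lines above patch a custom resource through the API server with --type=merge, check the .patched field after each change, and confirm that kubectl refuses to apply a strategic merge patch to a custom resource locally. A hedged sketch of those commands, assuming the Foo CRD from this test and a local manifest file (foo.yaml is a placeholder name):

  # Merge patches against the server work for custom resources.
  kubectl patch foos/test --type=merge -p '{"patched":"value1"}'
  kubectl patch foos/test --type=merge -p '{"patched":null}'
  # The default patch type is strategic merge, which is not implemented for
  # CRDs, so a --local patch without --type=merge fails as shown above.
  kubectl patch --local -f foo.yaml -p '{"patched":"value1"}' -o yaml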
I0912 17:14:00.351] +++ [0912 17:14:00] Testing cmd with image
I0912 17:14:00.436] Successful
I0912 17:14:00.436] message:deployment.apps/test1 created
I0912 17:14:00.436] has:deployment.apps/test1 created
I0912 17:14:00.509] deployment.apps "test1" deleted
I0912 17:14:00.577] Successful
I0912 17:14:00.577] message:error: Invalid image name "InvalidImageName": invalid reference format
I0912 17:14:00.578] has:error: Invalid image name "InvalidImageName": invalid reference format
I0912 17:14:00.588] +++ exit code: 0
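The image checks above create a deployment named test1 from a valid image and then verify that an image reference containing uppercase letters is rejected with "invalid reference format". Per the deprecation warning that appears a few lines below (buffered stderr), the test itself went through the old kubectl run generator; a rough equivalent is sketched here, with placeholder image names, and which commands validate the reference client-side can vary by kubectl version:

  # Valid reference: deployment.apps/test1 created, then removed.
  kubectl create deployment test1 --image=busybox
  kubectl delete deployment test1
  # Uppercase letters are not a valid image reference, so this is rejected.
  kubectl run test2 --image=InvalidImageName --restart=Never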
I0912 17:14:00.618] +++ [0912 17:14:00] Testing recursive resources
I0912 17:14:00.622] +++ [0912 17:14:00] Creating namespace namespace-1568308440-12573
I0912 17:14:00.686] namespace/namespace-1568308440-12573 created
I0912 17:14:00.753] Context "test" modified.
W0912 17:14:00.853] Error from server (NotFound): namespaces "non-native-resources" not found
W0912 17:14:00.854] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0912 17:14:00.854] I0912 17:14:00.424788   52872 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568308440-18410", Name:"test1", UID:"e312da4d-14cb-4fce-bd0c-ba02c0c05ba7", APIVersion:"apps/v1", ResourceVersion:"928", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W0912 17:14:00.855] I0912 17:14:00.429740   52872 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568308440-18410", Name:"test1-6cdffdb5b8", UID:"b2208ad1-7652-4e09-bba8-7432567f183c", APIVersion:"apps/v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-cfjw6
W0912 17:14:00.871] W0912 17:14:00.871069   49311 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 17:14:00.873] E0912 17:14:00.872787   52872 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 17:14:00.967] W0912 17:14:00.966529   49311 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 17:14:00.968] E0912 17:14:00.967748   52872 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 17:14:01.057] W0912 17:14:01.056318   49311 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 17:14:01.059] E0912 17:14:01.058068   52872 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 17:14:01.148] W0912 17:14:01.147571   49311 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 17:14:01.149] E0912 17:14:01.148993   52872 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 17:14:01.250] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 17:14:01.250] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 17:14:01.250] Successful
I0912 17:14:01.250] message:pod/busybox0 created
I0912 17:14:01.251] pod/busybox1 created
I0912 17:14:01.251] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0912 17:14:01.251] has:error validating data: kind not set
I0912 17:14:01.251] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 17:14:01.380] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0912 17:14:01.382] Successful
I0912 17:14:01.382] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 17:14:01.382] has:Object 'Kind' is missing
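The recursive-resources checks here point kubectl at hack/testdata/recursive/pod, a directory that intentionally includes one broken manifest (busybox-broken.yaml, whose kind field is misspelled as "ind"), and verify that the valid busybox pods are still created while the broken file produces a validation or decode error; the later checks in this block repeat the pattern for replace and annotate. A rough shape of those invocations, assuming a kubernetes repo checkout with the testdata paths from the log:

  # --recursive (-R) walks the directory tree; the good manifests succeed,
  # the broken one fails because "kind" is not set.
  kubectl create -f hack/testdata/recursive/pod --recursive
  kubectl replace -f hack/testdata/recursive/pod-modify --recursive
  kubectl annotate -f hack/testdata/recursive/pod annotatekey=annotatevalue --recursive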
I0912 17:14:01.476] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 17:14:01.767] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0912 17:14:01.769] Successful
I0912 17:14:01.769] message:pod/busybox0 replaced
I0912 17:14:01.769] pod/busybox1 replaced
I0912 17:14:01.769] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0912 17:14:01.769] has:error validating data: kind not set
I0912 17:14:01.854] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 17:14:01.944] Successful
I0912 17:14:01.945] message:Name:         busybox0
I0912 17:14:01.945] Namespace:    namespace-1568308440-12573
I0912 17:14:01.945] Priority:     0
I0912 17:14:01.945] Node:         <none>
... skipping 159 lines ...
I0912 17:14:01.966] has:Object 'Kind' is missing
I0912 17:14:02.037] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 17:14:02.201] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0912 17:14:02.204] Successful
I0912 17:14:02.204] message:pod/busybox0 annotated
I0912 17:14:02.204] pod/busybox1 annotated
I0912 17:14:02.204] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Ki