PR: jfbai: feat(apiserver): add user-agent and remote info into trace log for endpoints handlers.
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-10 18:28
Elapsed: 32m13s
Revision:
Builder: gke-prow-ssd-pool-1a225945-qlhl
Refs: master:46dd075b, 83237:91bddd13
pod: 9fbc76fc-eb8b-11e9-a2e6-062d8e473bcb
infra-commit: 63449e8df
repo: k8s.io/kubernetes
repo-commit: e092f2315fcf669713f8101aed7f1d9675ee0291
repos: {u'k8s.io/kubernetes': u'master:46dd075babbd90be86a4c3ccd8cd9a4bf2707e7d,83237:91bddd13485082892be8e8e471e358be317c4e9b'}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding 1m22s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
=== RUN   TestVolumeBinding
W1010 18:55:52.874947  111177 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1010 18:55:52.875007  111177 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1010 18:55:52.875035  111177 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1010 18:55:52.875052  111177 master.go:261] Using reconciler: 
I1010 18:55:52.879510  111177 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.879906  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.880158  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.888835  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.888910  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.903103  111177 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1010 18:55:52.903228  111177 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1010 18:55:52.903237  111177 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.904400  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.905484  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.905663  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.907349  111177 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 18:55:52.907421  111177 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 18:55:52.907465  111177 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.907901  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.908008  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.908679  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.909514  111177 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1010 18:55:52.909707  111177 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1010 18:55:52.909711  111177 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.910190  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.910312  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.911114  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.911673  111177 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1010 18:55:52.911851  111177 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1010 18:55:52.912172  111177 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.912506  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.912566  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.913387  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.914873  111177 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1010 18:55:52.914939  111177 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1010 18:55:52.915835  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.917136  111177 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.917372  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.917405  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.918921  111177 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1010 18:55:52.918980  111177 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1010 18:55:52.919485  111177 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.921023  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.921087  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.921982  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.923755  111177 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1010 18:55:52.924287  111177 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.925381  111177 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1010 18:55:52.927219  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.928306  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.928400  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.931177  111177 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1010 18:55:52.931256  111177 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1010 18:55:52.932814  111177 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.932916  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.933047  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.933080  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.935885  111177 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1010 18:55:52.936272  111177 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.936569  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.936651  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.936935  111177 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1010 18:55:52.939637  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.940180  111177 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1010 18:55:52.940298  111177 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1010 18:55:52.940698  111177 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.941399  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.941587  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.941624  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.943452  111177 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1010 18:55:52.943719  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.943997  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.944030  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.944150  111177 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1010 18:55:52.945547  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.946263  111177 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1010 18:55:52.946346  111177 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1010 18:55:52.946928  111177 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.947236  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.947286  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.948525  111177 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1010 18:55:52.948675  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.948837  111177 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1010 18:55:52.949937  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.949880  111177 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.950156  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.950188  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.951415  111177 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1010 18:55:52.951498  111177 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1010 18:55:52.951494  111177 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.951647  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.951663  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.952394  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.952879  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.952907  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.954264  111177 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.954456  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.954486  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.956161  111177 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1010 18:55:52.956201  111177 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1010 18:55:52.956875  111177 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.957167  111177 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.958197  111177 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.958975  111177 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1010 18:55:52.959125  111177 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.960876  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:52.961466  111177 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.964148  111177 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.965831  111177 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.966100  111177 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.966599  111177 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.967757  111177 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.969513  111177 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.970144  111177 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.972152  111177 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.972808  111177 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.973992  111177 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.976992  111177 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.979491  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.980152  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.980484  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.980793  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.981336  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.981780  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.982207  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.984103  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.984747  111177 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.987245  111177 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.988515  111177 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.989069  111177 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.989537  111177 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.990516  111177 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.991019  111177 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.991962  111177 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.993140  111177 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.994072  111177 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.995233  111177 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.995823  111177 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.996095  111177 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1010 18:55:52.996198  111177 master.go:464] Enabling API group "authentication.k8s.io".
I1010 18:55:52.996309  111177 master.go:464] Enabling API group "authorization.k8s.io".
I1010 18:55:52.996606  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.996950  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.997093  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.998505  111177 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 18:55:52.998772  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:52.998965  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:52.998996  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:52.999015  111177 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 18:55:53.001154  111177 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 18:55:53.001411  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.001496  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.001573  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.001602  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.001702  111177 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 18:55:53.005350  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.005549  111177 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 18:55:53.005624  111177 master.go:464] Enabling API group "autoscaling".
I1010 18:55:53.005823  111177 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 18:55:53.006243  111177 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.006524  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.006592  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.007892  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.009943  111177 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1010 18:55:53.010037  111177 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1010 18:55:53.011504  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.012637  111177 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.013100  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.013218  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.014795  111177 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1010 18:55:53.014828  111177 master.go:464] Enabling API group "batch".
I1010 18:55:53.015093  111177 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.015228  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.015249  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.015349  111177 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1010 18:55:53.017184  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.018489  111177 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1010 18:55:53.018676  111177 master.go:464] Enabling API group "certificates.k8s.io".
I1010 18:55:53.018857  111177 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1010 18:55:53.019334  111177 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.019751  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.019873  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.020400  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.022869  111177 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 18:55:53.022873  111177 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 18:55:53.024300  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.025611  111177 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.026188  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.026291  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.028511  111177 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 18:55:53.028561  111177 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 18:55:53.030115  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.028636  111177 master.go:464] Enabling API group "coordination.k8s.io".
I1010 18:55:53.032080  111177 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1010 18:55:53.032537  111177 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.032951  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.033122  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.034375  111177 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 18:55:53.034411  111177 master.go:464] Enabling API group "extensions".
I1010 18:55:53.034704  111177 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.034937  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.034966  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.035065  111177 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 18:55:53.036376  111177 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1010 18:55:53.036872  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.036873  111177 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1010 18:55:53.037198  111177 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.038660  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.038699  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.039449  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.041072  111177 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 18:55:53.041109  111177 master.go:464] Enabling API group "networking.k8s.io".
I1010 18:55:53.041129  111177 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 18:55:53.042445  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.043194  111177 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.043478  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.043505  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.044909  111177 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1010 18:55:53.044933  111177 master.go:464] Enabling API group "node.k8s.io".
I1010 18:55:53.044992  111177 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1010 18:55:53.045365  111177 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.045567  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.045593  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.046841  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.048470  111177 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1010 18:55:53.048555  111177 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1010 18:55:53.050114  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.051171  111177 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.051429  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.051468  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.053482  111177 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1010 18:55:53.053920  111177 master.go:464] Enabling API group "policy".
I1010 18:55:53.053553  111177 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1010 18:55:53.055361  111177 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.055665  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.055719  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.056087  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.057510  111177 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 18:55:53.057593  111177 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 18:55:53.057974  111177 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.058483  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.058556  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.058811  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.059841  111177 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 18:55:53.059909  111177 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 18:55:53.059953  111177 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.060479  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.060522  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.061472  111177 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 18:55:53.061625  111177 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 18:55:53.061812  111177 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.062683  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.062763  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.065834  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.067058  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.072916  111177 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 18:55:53.073027  111177 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 18:55:53.073102  111177 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.073561  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.073619  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.074940  111177 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 18:55:53.075081  111177 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 18:55:53.075311  111177 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.075585  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.075630  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.076685  111177 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 18:55:53.076868  111177 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 18:55:53.076817  111177 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.077553  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.077787  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.078223  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.079577  111177 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 18:55:53.079698  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.079984  111177 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.079422  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.081134  111177 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 18:55:53.082485  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.082524  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.085152  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.085359  111177 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 18:55:53.085406  111177 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1010 18:55:53.085495  111177 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 18:55:53.086276  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.088534  111177 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.088721  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.088776  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.089688  111177 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 18:55:53.089810  111177 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 18:55:53.089965  111177 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.090848  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.090881  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.090957  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.091758  111177 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 18:55:53.091792  111177 master.go:464] Enabling API group "scheduling.k8s.io".
I1010 18:55:53.091953  111177 master.go:453] Skipping disabled API group "settings.k8s.io".
I1010 18:55:53.091969  111177 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 18:55:53.092184  111177 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.092327  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.092358  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.092653  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.093944  111177 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 18:55:53.094080  111177 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 18:55:53.094168  111177 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.094327  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.094361  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.094851  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.096262  111177 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 18:55:53.096361  111177 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.096438  111177 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 18:55:53.096558  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.096587  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.097646  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.098494  111177 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1010 18:55:53.098604  111177 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.098878  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.098920  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.099081  111177 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1010 18:55:53.100052  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.101366  111177 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1010 18:55:53.101472  111177 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1010 18:55:53.101764  111177 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.101983  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.102020  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.102381  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.103416  111177 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 18:55:53.103536  111177 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 18:55:53.103692  111177 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.103881  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.103917  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.104954  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.105051  111177 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 18:55:53.105077  111177 master.go:464] Enabling API group "storage.k8s.io".
I1010 18:55:53.105203  111177 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 18:55:53.106472  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.107072  111177 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.107429  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.107473  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.108874  111177 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1010 18:55:53.108984  111177 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1010 18:55:53.109776  111177 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.110043  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.110085  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.111099  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.111826  111177 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1010 18:55:53.111888  111177 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1010 18:55:53.112261  111177 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.112503  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.112541  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.113142  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.115155  111177 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1010 18:55:53.115440  111177 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1010 18:55:53.115941  111177 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.116250  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.116288  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.116377  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.118025  111177 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1010 18:55:53.118542  111177 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.118758  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.118815  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.119401  111177 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1010 18:55:53.120987  111177 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1010 18:55:53.121161  111177 master.go:464] Enabling API group "apps".
I1010 18:55:53.121312  111177 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.121555  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.121669  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.121070  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.121109  111177 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1010 18:55:53.124180  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.124360  111177 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 18:55:53.124596  111177 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 18:55:53.124877  111177 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.125502  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.125546  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.127116  111177 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 18:55:53.127219  111177 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 18:55:53.127279  111177 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.127120  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.127424  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.127444  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.129683  111177 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 18:55:53.130190  111177 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 18:55:53.130238  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.129879  111177 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.131052  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.131188  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.131898  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.132848  111177 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 18:55:53.132922  111177 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 18:55:53.132922  111177 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1010 18:55:53.133065  111177 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.133435  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.133459  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.134147  111177 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 18:55:53.134253  111177 master.go:464] Enabling API group "events.k8s.io".
I1010 18:55:53.134260  111177 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 18:55:53.134607  111177 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.134880  111177 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.135307  111177 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.135508  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.135344  111177 watch_cache.go:451] Replace watchCache (rev: 32528) 
I1010 18:55:53.135676  111177 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.135868  111177 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.136002  111177 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.136244  111177 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.136359  111177 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.136497  111177 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.136647  111177 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.137819  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.138044  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.138867  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.139131  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.140177  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.140431  111177 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.141252  111177 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.141510  111177 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.142431  111177 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.142711  111177 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.142779  111177 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1010 18:55:53.143596  111177 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.143798  111177 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.144135  111177 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.145402  111177 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.146089  111177 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.147217  111177 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.147549  111177 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.148639  111177 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.149957  111177 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.150221  111177 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.150909  111177 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.150987  111177 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1010 18:55:53.152064  111177 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.152374  111177 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.152964  111177 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.153607  111177 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.154447  111177 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.155188  111177 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.157826  111177 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.158644  111177 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.159156  111177 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.160078  111177 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.160761  111177 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.160859  111177 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1010 18:55:53.161426  111177 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.162248  111177 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.162314  111177 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1010 18:55:53.162973  111177 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.163480  111177 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.164013  111177 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.164629  111177 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.165072  111177 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.165605  111177 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.166519  111177 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.166615  111177 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1010 18:55:53.167716  111177 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.168755  111177 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.169223  111177 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.170219  111177 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.170955  111177 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.171668  111177 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.172705  111177 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.173179  111177 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.173673  111177 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.174995  111177 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.175598  111177 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.176099  111177 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 18:55:53.176276  111177 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1010 18:55:53.176356  111177 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1010 18:55:53.177412  111177 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.178469  111177 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.179684  111177 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.180534  111177 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.181678  111177 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e363539f-4a5e-4e74-9ddc-5eb895b1e875", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 18:55:53.196305  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.196355  111177 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1010 18:55:53.196368  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.196431  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.196443  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.196450  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.196528  111177 httplog.go:90] GET /healthz: (359.549µs) 0 [Go-http-client/1.1 127.0.0.1:34504]
I1010 18:55:53.198259  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.218322ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.205350  111177 httplog.go:90] GET /api/v1/services: (2.196312ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.216841  111177 httplog.go:90] GET /api/v1/services: (2.924105ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.220583  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.220623  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.220646  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.220658  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.220670  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.220717  111177 httplog.go:90] GET /healthz: (361.021µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.223104  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.098969ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I1010 18:55:53.224407  111177 httplog.go:90] GET /api/v1/services: (1.573666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:53.225027  111177 httplog.go:90] GET /api/v1/services: (3.211358ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.227378  111177 httplog.go:90] POST /api/v1/namespaces: (3.571773ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I1010 18:55:53.229228  111177 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.274737ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.231581  111177 httplog.go:90] POST /api/v1/namespaces: (1.954165ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.233366  111177 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.231245ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.235502  111177 httplog.go:90] POST /api/v1/namespaces: (1.670175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.299620  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.299692  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.299709  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.299720  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.299752  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.299811  111177 httplog.go:90] GET /healthz: (462.652µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.321929  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.321975  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.321985  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.321992  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.321999  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.322028  111177 httplog.go:90] GET /healthz: (315.134µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.398082  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.398142  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.398158  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.398169  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.398178  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.398234  111177 httplog.go:90] GET /healthz: (529.764µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.421827  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.421869  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.421893  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.421903  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.421910  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.421960  111177 httplog.go:90] GET /healthz: (350.08µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.497796  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.497834  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.497844  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.497853  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.497861  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.497897  111177 httplog.go:90] GET /healthz: (293.47µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.525960  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.526000  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.526012  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.526023  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.526039  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.526080  111177 httplog.go:90] GET /healthz: (365.185µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.597855  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.597890  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.597902  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.597911  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.597917  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.597974  111177 httplog.go:90] GET /healthz: (360.412µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.622053  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.622103  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.622116  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.622127  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.622136  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.622211  111177 httplog.go:90] GET /healthz: (404.213µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.697896  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.697936  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.697949  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.697958  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.697964  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.698021  111177 httplog.go:90] GET /healthz: (398.039µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.721863  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.721904  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.721914  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.721922  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.721928  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.721961  111177 httplog.go:90] GET /healthz: (308.777µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.797969  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.798008  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.798020  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.798030  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.798040  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.798101  111177 httplog.go:90] GET /healthz: (421.537µs) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.821850  111177 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 18:55:53.821886  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.821910  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.821920  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.821929  111177 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.821977  111177 httplog.go:90] GET /healthz: (357.29µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.876400  111177 client.go:361] parsed scheme: "endpoint"
I1010 18:55:53.876506  111177 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 18:55:53.898678  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.898707  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.898716  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.898722  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.898868  111177 httplog.go:90] GET /healthz: (1.339366ms) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:53.923328  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.923365  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.923376  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.923386  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.923439  111177 httplog.go:90] GET /healthz: (1.842576ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:53.999121  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:53.999158  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:53.999167  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:53.999175  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:53.999229  111177 httplog.go:90] GET /healthz: (1.527729ms) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:54.028175  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.028233  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:54.028248  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:54.028261  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:54.028923  111177 httplog.go:90] GET /healthz: (7.109915ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.099993  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.100041  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:54.100053  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:54.100065  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:54.100338  111177 httplog.go:90] GET /healthz: (2.614199ms) 0 [Go-http-client/1.1 127.0.0.1:34502]
I1010 18:55:54.123566  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.124002  111177 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 18:55:54.124101  111177 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 18:55:54.124175  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 18:55:54.124410  111177 httplog.go:90] GET /healthz: (2.533748ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.189665  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.193401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.189847  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.919156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.191875  111177 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.592033ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.192160  111177 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.760747ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.195114  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.669135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.197692  111177 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (5.21546ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.199473  111177 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (6.637865ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.200087  111177 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1010 18:55:54.200276  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.025386ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.201560  111177 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.213571ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.205255  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.856122ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.207028  111177 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (4.900076ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.207392  111177 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1010 18:55:54.207501  111177 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1010 18:55:54.207400  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.207583  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.207619  111177 httplog.go:90] GET /healthz: (7.583699ms) 0 [Go-http-client/1.1 127.0.0.1:34508]
I1010 18:55:54.208197  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.045615ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I1010 18:55:54.209965  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.3251ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.211574  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.07086ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.213186  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.16361ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.215879  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.538202ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.217754  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.156995ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.224128  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.597933ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.224417  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1010 18:55:54.224839  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.224951  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.225796  111177 httplog.go:90] GET /healthz: (2.632879ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.226605  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.81342ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.229243  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.16239ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.229666  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1010 18:55:54.231159  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.110821ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.234039  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.250042ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.234967  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1010 18:55:54.236710  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.303968ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.239409  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.95325ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.239610  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1010 18:55:54.241262  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.409764ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.243894  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.061325ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.244250  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1010 18:55:54.246545  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.729746ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.249968  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.812515ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.250363  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1010 18:55:54.251934  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.362847ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.255227  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.52705ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.255513  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1010 18:55:54.258237  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.502842ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.261257  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.429014ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.261635  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1010 18:55:54.263410  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.438218ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.266943  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.939319ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.267376  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1010 18:55:54.269565  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.474315ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.272813  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.691728ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.273810  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1010 18:55:54.275439  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.345624ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.278256  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.143187ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.278518  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1010 18:55:54.280226  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.262573ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.287223  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.881165ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.287943  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1010 18:55:54.291106  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (2.641032ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.294559  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.907835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.294833  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1010 18:55:54.297934  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (2.797478ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.301020  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.301055  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.301111  111177 httplog.go:90] GET /healthz: (3.288994ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:54.302698  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.181848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.303109  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1010 18:55:54.304987  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.560439ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.308670  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.045349ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.309305  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1010 18:55:54.311288  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.577337ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.314243  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.277447ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.314573  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1010 18:55:54.316164  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.301747ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.319278  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.573385ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.319613  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1010 18:55:54.321090  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.07355ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.322674  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.322916  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.324330  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.081526ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.324338  111177 httplog.go:90] GET /healthz: (2.798587ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.324626  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 18:55:54.326222  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.331737ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.328718  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80054ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.329106  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1010 18:55:54.330325  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (952.382µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.333269  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360585ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.333603  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1010 18:55:54.334910  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.022153ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.338009  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.317285ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.338342  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1010 18:55:54.339689  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.050075ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.343337  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.965057ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.343644  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1010 18:55:54.346903  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.660105ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.350468  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.895774ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.350774  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1010 18:55:54.352707  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.551463ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.356213  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.372758ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.356518  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1010 18:55:54.360032  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (3.152067ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.363106  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.536888ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.363383  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1010 18:55:54.365021  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.225483ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.368307  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.393104ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.369104  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1010 18:55:54.371101  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.489462ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.373667  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.036925ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.374115  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1010 18:55:54.375450  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.081603ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.378058  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.021053ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.378332  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 18:55:54.379747  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.045928ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.382562  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.246586ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.382913  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 18:55:54.384132  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (929.002µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.386997  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.305608ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.387316  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 18:55:54.389562  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.266542ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.392526  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.257275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.392972  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 18:55:54.394442  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.205986ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.397766  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.822741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.398225  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 18:55:54.399842  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.399944  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.400051  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.611773ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.401610  111177 httplog.go:90] GET /healthz: (3.956092ms) 0 [Go-http-client/1.1 127.0.0.1:34508]
I1010 18:55:54.403561  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.832943ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.404095  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 18:55:54.406055  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.601314ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.410061  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.60634ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.410418  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 18:55:54.412057  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.234168ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.415000  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.201369ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.415354  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 18:55:54.416947  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.212286ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.420207  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.541813ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.420532  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 18:55:54.422002  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.268741ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.422567  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.423150  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.423215  111177 httplog.go:90] GET /healthz: (1.839177ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.425409  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.929954ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.426007  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 18:55:54.427391  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.130708ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.430337  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.316933ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.430546  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1010 18:55:54.432631  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.287949ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.435655  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.453292ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.438055  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 18:55:54.439859  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.401368ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.443254  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.811864ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.444018  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1010 18:55:54.447290  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (2.120318ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.451073  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.092348ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.451491  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 18:55:54.453064  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.297439ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.456006  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.376888ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.456454  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 18:55:54.458081  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.416266ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.461381  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.757976ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.461848  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 18:55:54.463618  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.443525ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.466647  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.497084ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.467208  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 18:55:54.468966  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.36498ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.472384  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.721754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.472843  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 18:55:54.474801  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.658579ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.479364  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.602732ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.479824  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1010 18:55:54.482937  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.71193ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.486131  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.586916ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.486642  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 18:55:54.489457  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (2.436839ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.492764  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.653553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.493121  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1010 18:55:54.495140  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.6574ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.498284  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.498321  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.498387  111177 httplog.go:90] GET /healthz: (1.106869ms) 0 [Go-http-client/1.1 127.0.0.1:34508]
I1010 18:55:54.499794  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.733237ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.500113  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 18:55:54.501649  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.175266ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.504255  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.976589ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.504550  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 18:55:54.505989  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.188504ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.509350  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.767258ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.509686  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 18:55:54.511298  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.270082ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.513790  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.858261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.514163  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 18:55:54.515553  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.143335ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.518204  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.028789ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.518655  111177 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1010 18:55:54.520247  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.271658ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.522855  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.522986  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.523123  111177 httplog.go:90] GET /healthz: (1.586625ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.531226  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.758028ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.531829  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1010 18:55:54.548751  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.123629ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.570454  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.578692ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.571150  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1010 18:55:54.590784  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (3.759553ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.599716  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.599880  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.599944  111177 httplog.go:90] GET /healthz: (2.335577ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:54.609575  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.997109ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.609930  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1010 18:55:54.624848  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.624911  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.625025  111177 httplog.go:90] GET /healthz: (3.105369ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.629414  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.859257ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.650603  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.915088ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.650973  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1010 18:55:54.668978  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.08929ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.692050  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.205738ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.692982  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1010 18:55:54.699047  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.699098  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.699186  111177 httplog.go:90] GET /healthz: (1.589732ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:54.708327  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.82938ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.722679  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.722713  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.722864  111177 httplog.go:90] GET /healthz: (1.21701ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.729856  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.936564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.730269  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 18:55:54.768268  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (21.490055ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.774540  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.620476ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.774804  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1010 18:55:54.788366  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.695262ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.799048  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.799092  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.799171  111177 httplog.go:90] GET /healthz: (1.571957ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:54.811114  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.28012ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.815543  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1010 18:55:54.822720  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.822793  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.822876  111177 httplog.go:90] GET /healthz: (1.3161ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.828743  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.274371ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.850950  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.159418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.851437  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1010 18:55:54.869504  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.844076ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.889910  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.360907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.890254  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1010 18:55:54.899305  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.899353  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.899417  111177 httplog.go:90] GET /healthz: (1.633628ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:54.913100  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (3.53364ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.929151  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:54.929382  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:54.929758  111177 httplog.go:90] GET /healthz: (6.077045ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:54.935928  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.441978ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.937478  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 18:55:54.951596  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (2.07282ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.974303  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.989256ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:54.974717  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 18:55:54.994010  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (3.824281ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:55.002077  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.002113  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.002167  111177 httplog.go:90] GET /healthz: (4.083678ms) 0 [Go-http-client/1.1 127.0.0.1:34508]
I1010 18:55:55.018747  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.255512ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:55.019197  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 18:55:55.032114  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.032172  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.032254  111177 httplog.go:90] GET /healthz: (2.104849ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:55.033569  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.572291ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.050033  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.439917ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.050371  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 18:55:55.070828  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.072598ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.094181  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.232208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.094951  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 18:55:55.098813  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.098845  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.098898  111177 httplog.go:90] GET /healthz: (1.488331ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:55.110693  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.903822ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.124774  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.124816  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.124916  111177 httplog.go:90] GET /healthz: (1.597301ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.136929  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.3594ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.137321  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 18:55:55.149037  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.379119ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.175351  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.007358ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.175758  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 18:55:55.190211  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (3.340458ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.229516  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.229559  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.229608  111177 httplog.go:90] GET /healthz: (28.730774ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:55.230027  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (23.327291ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:55.230332  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 18:55:55.231919  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.231961  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.232012  111177 httplog.go:90] GET /healthz: (2.429786ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.232475  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.887125ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I1010 18:55:55.251035  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.174242ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.251474  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 18:55:55.268925  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.29486ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.292004  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.174729ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.292308  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 18:55:55.299503  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.299544  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.299607  111177 httplog.go:90] GET /healthz: (1.444475ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:55.315165  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (8.229659ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.327875  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.327910  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.328142  111177 httplog.go:90] GET /healthz: (6.342243ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.332172  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.676714ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.332526  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1010 18:55:55.348156  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.66322ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.377704  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.505008ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.378203  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 18:55:55.392779  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (6.224053ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.400143  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.400182  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.400245  111177 httplog.go:90] GET /healthz: (1.568738ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:55.415700  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.664684ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.416308  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1010 18:55:55.427840  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.427885  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.427970  111177 httplog.go:90] GET /healthz: (4.944097ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.429708  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.956681ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.454210  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.231867ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.454640  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 18:55:55.473173  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.130252ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.494720  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.934276ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.495127  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 18:55:55.504050  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.504083  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.504156  111177 httplog.go:90] GET /healthz: (5.698615ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:55.523855  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (7.578357ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.524098  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.524132  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.524177  111177 httplog.go:90] GET /healthz: (2.210394ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.530515  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.073493ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.530850  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 18:55:55.555247  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (5.299342ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.571585  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.313702ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.571971  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 18:55:55.588629  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.961295ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.600822  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.600867  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.600931  111177 httplog.go:90] GET /healthz: (2.875782ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:55.616632  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.657479ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.616967  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 18:55:55.622979  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.623012  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.623084  111177 httplog.go:90] GET /healthz: (1.619904ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.628971  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.465415ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.653445  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.240824ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.653886  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1010 18:55:55.669443  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (2.69749ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.700635  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (13.754554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.701764  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 18:55:55.708641  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.708692  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.708776  111177 httplog.go:90] GET /healthz: (10.255618ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:55.724265  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.724309  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.724362  111177 httplog.go:90] GET /healthz: (2.762974ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.725178  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (15.820365ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.732240  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.88993ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.732603  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1010 18:55:55.749562  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.45694ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.770384  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.705242ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.770902  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 18:55:55.788781  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.227728ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.823785  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.354721ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.824005  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.824027  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.824060  111177 httplog.go:90] GET /healthz: (7.696918ms) 0 [Go-http-client/1.1 127.0.0.1:34776]
I1010 18:55:55.824483  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 18:55:55.825256  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.825287  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.825348  111177 httplog.go:90] GET /healthz: (2.928731ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34956]
I1010 18:55:55.829089  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.183663ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.850798  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.198921ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.851325  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 18:55:55.871352  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (4.30027ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.894558  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.061591ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.894893  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 18:55:55.899933  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.899969  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.900047  111177 httplog.go:90] GET /healthz: (1.981207ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:55.919990  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (11.894622ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.929526  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.065182ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:55.930672  111177 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1010 18:55:55.935895  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:55.935934  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:55.935983  111177 httplog.go:90] GET /healthz: (14.22037ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.951676  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.748835ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.955284  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.213078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.979434  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (12.814713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.979847  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1010 18:55:55.991550  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (3.226585ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:55.994977  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.85624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.000395  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.000429  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.000472  111177 httplog.go:90] GET /healthz: (2.922202ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.011121  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.61081ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.011444  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 18:55:56.023280  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.023322  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.023389  111177 httplog.go:90] GET /healthz: (1.771194ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.032025  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.525448ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.035685  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.64083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.054036  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.950928ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.054294  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 18:55:56.069677  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.803924ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.084625  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.673247ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.089316  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.765815ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.089655  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 18:55:56.103578  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.103610  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.103662  111177 httplog.go:90] GET /healthz: (4.493263ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.108307  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.653881ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.111771  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.850229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.123601  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.123652  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.123746  111177 httplog.go:90] GET /healthz: (2.010144ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.129718  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.162995ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.130096  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 18:55:56.148876  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.236306ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.152479  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.087571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.170464  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.098406ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.170852  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 18:55:56.188396  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.787912ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.192361  111177 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.237726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.202634  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.202720  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.202829  111177 httplog.go:90] GET /healthz: (5.234113ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.209713  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.263248ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.210110  111177 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 18:55:56.223306  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.223343  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.223406  111177 httplog.go:90] GET /healthz: (1.820222ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.228516  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.803262ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.231150  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.071946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.253549  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.76194ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.253958  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1010 18:55:56.268625  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.03787ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.271506  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.071442ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.290558  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.803398ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.291881  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 18:55:56.299435  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.299479  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.299538  111177 httplog.go:90] GET /healthz: (1.997968ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.308612  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.050671ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.311645  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.1959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.324607  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.324643  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.324692  111177 httplog.go:90] GET /healthz: (2.388585ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.328778  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.373825ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.329090  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 18:55:56.348161  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.586497ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.350695  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.865181ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.369433  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.780054ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.370297  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 18:55:56.388500  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.94764ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.391567  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.319002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.400351  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.400398  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.400476  111177 httplog.go:90] GET /healthz: (2.332234ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.412295  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.575975ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.412687  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 18:55:56.428428  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.428468  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.428522  111177 httplog.go:90] GET /healthz: (1.828509ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:56.429165  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.951677ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.431455  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.489850  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (14.028776ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.491324  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 18:55:56.492999  111177 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.374543ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.496199  111177 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.679557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.500869  111177 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 18:55:56.502545  111177 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 18:55:56.501334  111177 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.127706ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:56.502820  111177 httplog.go:90] GET /healthz: (5.367669ms) 0 [Go-http-client/1.1 127.0.0.1:34872]
I1010 18:55:56.503356  111177 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 18:55:56.523319  111177 httplog.go:90] GET /healthz: (1.590517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.525775  111177 httplog.go:90] GET /api/v1/namespaces/default: (1.891249ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.528817  111177 httplog.go:90] POST /api/v1/namespaces: (2.369534ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.531140  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.654503ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.538499  111177 httplog.go:90] POST /api/v1/namespaces/default/services: (6.851418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.541163  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.19829ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.545775  111177 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (4.054321ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.601355  111177 httplog.go:90] GET /healthz: (3.166668ms) 200 [Go-http-client/1.1 127.0.0.1:34872]
W1010 18:55:56.602832  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.602872  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.602888  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.602922  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.602936  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.602964  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.603003  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.603015  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.603030  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.603041  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.603049  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1010 18:55:56.603139  111177 factory.go:289] Creating scheduler from algorithm provider 'DefaultProvider'
I1010 18:55:56.603185  111177 factory.go:377] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1010 18:55:56.605260  111177 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605293  111177 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605267  111177 reflector.go:150] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605344  111177 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605356  111177 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605374  111177 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605705  111177 reflector.go:150] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605721  111177 reflector.go:185] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605746  111177 reflector.go:150] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.605763  111177 reflector.go:185] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.607272  111177 reflector.go:150] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.607294  111177 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.163034ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.607297  111177 reflector.go:185] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.608380  111177 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (2.055496ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:55:56.608408  111177 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (742.71µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.608652  111177 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (1.084538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I1010 18:55:56.609328  111177 get.go:251] Starting watch for /api/v1/services, rv=32903 labels= fields= timeout=9m16s
I1010 18:55:56.610087  111177 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.610116  111177 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.611374  111177 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (786.157µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I1010 18:55:56.611990  111177 reflector.go:150] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.612019  111177 reflector.go:185] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.612235  111177 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (1.070812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34980]
I1010 18:55:56.612577  111177 reflector.go:150] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.612600  111177 reflector.go:185] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.613523  111177 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.613543  111177 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.614041  111177 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.614068  111177 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.614315  111177 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (803.062µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I1010 18:55:56.615216  111177 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (811.051µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.615771  111177 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (944.534µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34982]
I1010 18:55:56.618838  111177 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=32528 labels= fields= timeout=6m22s
I1010 18:55:56.621347  111177 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (4.888372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:55:56.631354  111177 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (13.378057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34988]
I1010 18:55:56.640828  111177 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=32528 labels= fields= timeout=5m13s
I1010 18:55:56.641448  111177 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=32528 labels= fields= timeout=7m29s
I1010 18:55:56.641896  111177 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=32528 labels= fields= timeout=9m32s
I1010 18:55:56.642072  111177 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=32528 labels= fields= timeout=7m15s
I1010 18:55:56.642377  111177 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=32528 labels= fields= timeout=9m43s
I1010 18:55:56.644002  111177 get.go:251] Starting watch for /api/v1/pods, rv=32528 labels= fields= timeout=9m52s
I1010 18:55:56.644197  111177 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=32528 labels= fields= timeout=7m2s
I1010 18:55:56.644759  111177 get.go:251] Starting watch for /api/v1/nodes, rv=32528 labels= fields= timeout=6m26s
I1010 18:55:56.651029  111177 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=32528 labels= fields= timeout=8m45s
I1010 18:55:56.712008  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712267  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712419  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712551  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712711  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712848  111177 shared_informer.go:227] caches populated
I1010 18:55:56.712966  111177 shared_informer.go:227] caches populated
I1010 18:55:56.713082  111177 shared_informer.go:227] caches populated
I1010 18:55:56.713222  111177 shared_informer.go:227] caches populated
I1010 18:55:56.713356  111177 shared_informer.go:227] caches populated
I1010 18:55:56.713475  111177 shared_informer.go:227] caches populated
I1010 18:55:56.713595  111177 shared_informer.go:227] caches populated
I1010 18:55:56.714149  111177 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1010 18:55:56.714357  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.714536  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.714719  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.714958  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 18:55:56.715223  111177 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1010 18:55:56.715866  111177 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.715899  111177 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.716321  111177 pv_controller_base.go:289] Starting persistent volume controller
I1010 18:55:56.716352  111177 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1010 18:55:56.716553  111177 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.716683  111177 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.716964  111177 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.721398  111177 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.717171  111177 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.722031  111177 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.717186  111177 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.722464  111177 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1010 18:55:56.724181  111177 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (5.677101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35004]
I1010 18:55:56.726423  111177 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (978.009µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35008]
I1010 18:55:56.726574  111177 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (637.089µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35004]
I1010 18:55:56.727152  111177 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=32528 labels= fields= timeout=9m19s
I1010 18:55:56.728244  111177 get.go:251] Starting watch for /api/v1/nodes, rv=32528 labels= fields= timeout=7m17s
I1010 18:55:56.728659  111177 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (471.554µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35010]
I1010 18:55:56.729492  111177 get.go:251] Starting watch for /api/v1/pods, rv=32528 labels= fields= timeout=7m53s
I1010 18:55:56.730117  111177 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=32528 labels= fields= timeout=7m24s
I1010 18:55:56.731618  111177 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (4.413355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35012]
I1010 18:55:56.733931  111177 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=32528 labels= fields= timeout=7m45s
I1010 18:55:56.815836  111177 shared_informer.go:227] caches populated
I1010 18:55:56.815898  111177 shared_informer.go:227] caches populated
I1010 18:55:56.815903  111177 shared_informer.go:227] caches populated
I1010 18:55:56.815908  111177 shared_informer.go:227] caches populated
I1010 18:55:56.815912  111177 shared_informer.go:227] caches populated
I1010 18:55:56.816945  111177 shared_informer.go:227] caches populated
I1010 18:55:56.817146  111177 shared_informer.go:204] Caches are synced for persistent volume 
I1010 18:55:56.817283  111177 pv_controller_base.go:160] controller initialized
I1010 18:55:56.817650  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:55:56.823296  111177 httplog.go:90] POST /api/v1/nodes: (6.479883ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.824886  111177 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I1010 18:55:56.828226  111177 node_tree.go:93] Added node "node-2" in group "" to NodeTree
I1010 18:55:56.828532  111177 httplog.go:90] POST /api/v1/nodes: (4.613361ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.833607  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.512127ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.836714  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.273296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.837418  111177 volume_binding_test.go:191] Running test wait can bind
I1010 18:55:56.841313  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.388252ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.846908  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.380008ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.854224  111177 httplog.go:90] POST /api/v1/persistentvolumes: (6.584053ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.854867  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 32972
I1010 18:55:56.855158  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1010 18:55:56.855276  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1010 18:55:56.855377  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1010 18:55:56.860369  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (3.78053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.860697  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32973
I1010 18:55:56.861081  111177 pv_controller.go:800] volume "pv-w-canbind" entered phase "Available"
I1010 18:55:56.861201  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32973
I1010 18:55:56.861264  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1010 18:55:56.861289  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1010 18:55:56.861296  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1010 18:55:56.861332  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1010 18:55:56.863334  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind", version 32974
I1010 18:55:56.863375  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:56.863444  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: no volume found
I1010 18:55:56.863485  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind] status: set phase Pending
I1010 18:55:56.863512  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind] status: phase Pending already set
I1010 18:55:56.863550  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (7.839581ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.863659  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-canbind", UID:"b0843c86-b083-4e01-8bcf-5a0a21bfa908", APIVersion:"v1", ResourceVersion:"32974", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:55:56.867824  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.769878ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:56.875276  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (11.205899ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.876067  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind
I1010 18:55:56.876209  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind
I1010 18:55:56.876703  111177 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind" on node "node-1"
I1010 18:55:56.877066  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" on node "node-2"
I1010 18:55:56.877201  111177 scheduler_binder.go:725] storage class "wait-d7wk" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" does not support dynamic provisioning
I1010 18:55:56.877401  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind", node "node-1"
I1010 18:55:56.877535  111177 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind", version 32973
I1010 18:55:56.877687  111177 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind", node "node-1"
I1010 18:55:56.877855  111177 scheduler_binder.go:404] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" bound to volume "pv-w-canbind"
I1010 18:55:56.887225  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32980
I1010 18:55:56.887407  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.887540  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind
I1010 18:55:56.887568  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:56.887622  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (8.747658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.887584  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:55:56.887849  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" with version 32974
I1010 18:55:56.887900  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:56.887981  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.888004  111177 pv_controller.go:933] binding volume "pv-w-canbind" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.888013  111177 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.888018  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.888114  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.888131  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1010 18:55:56.891817  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (3.031248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.892117  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32984
I1010 18:55:56.892165  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.892186  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind
I1010 18:55:56.892205  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:56.892221  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:55:56.892506  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32984
I1010 18:55:56.892619  111177 pv_controller.go:800] volume "pv-w-canbind" entered phase "Bound"
I1010 18:55:56.892737  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: binding to "pv-w-canbind"
I1010 18:55:56.892828  111177 pv_controller.go:903] volume "pv-w-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.897106  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind: (3.883398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.897893  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" with version 32985
I1010 18:55:56.897948  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: bound to "pv-w-canbind"
I1010 18:55:56.897962  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind] status: set phase Bound
I1010 18:55:56.901685  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind/status: (3.25355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:56.902144  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" with version 32986
I1010 18:55:56.902264  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" entered phase "Bound"
I1010 18:55:56.902346  111177 pv_controller.go:959] volume "pv-w-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.902443  111177 pv_controller.go:960] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.902504  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1010 18:55:56.902589  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" with version 32986
I1010 18:55:56.902693  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1010 18:55:56.902870  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.902959  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: claim is already correctly bound
I1010 18:55:56.903057  111177 pv_controller.go:933] binding volume "pv-w-canbind" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.903129  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.903220  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.903317  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1010 18:55:56.903401  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I1010 18:55:56.903416  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: binding to "pv-w-canbind"
I1010 18:55:56.903444  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind]: already bound to "pv-w-canbind"
I1010 18:55:56.903454  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind] status: set phase Bound
I1010 18:55:56.903491  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind] status: phase Bound already set
I1010 18:55:56.903503  111177 pv_controller.go:959] volume "pv-w-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind"
I1010 18:55:56.903524  111177 pv_controller.go:960] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:56.903539  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1010 18:55:56.993070  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (6.158384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.079174  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.694016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.179661  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (3.034817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.285934  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (9.390117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.379420  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.761024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.490590  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (14.183913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.578938  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.439665ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.603350  111177 cache.go:669] Couldn't expire cache for pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind. Binding is still in progress.
I1010 18:55:57.679501  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.540365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.779526  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.797325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.881437  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (4.717663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.888428  111177 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind" are bound
I1010 18:55:57.888552  111177 factory.go:710] Attempting to bind pod-w-canbind to node-1
I1010 18:55:57.894629  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind/binding: (5.345021ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.895346  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:55:57.902170  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (5.03568ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.979358  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind: (2.856668ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.983194  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind: (3.039756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:57.986576  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (2.266498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.001154  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (13.651678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.008568  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" deleted
I1010 18:55:58.008637  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32984
I1010 18:55:58.008708  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:58.008718  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind
I1010 18:55:58.009171  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (7.072692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.012461  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind: (3.397832ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.013146  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind not found
I1010 18:55:58.013187  111177 pv_controller.go:577] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I1010 18:55:58.013206  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Released
I1010 18:55:58.030849  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (16.930118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.031211  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 33083
I1010 18:55:58.031259  111177 pv_controller.go:800] volume "pv-w-canbind" entered phase "Released"
I1010 18:55:58.031273  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1010 18:55:58.031770  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 33083
I1010 18:55:58.031836  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Released, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind (uid: b0843c86-b083-4e01-8bcf-5a0a21bfa908)", boundByController: true
I1010 18:55:58.031851  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind
I1010 18:55:58.031880  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind not found
I1010 18:55:58.031890  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1010 18:55:58.033621  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (23.01567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.035427  111177 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1010 18:55:58.035494  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind" was already processed
I1010 18:55:58.071220  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (35.159551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.071573  111177 volume_binding_test.go:191] Running test wait pvc prebound
I1010 18:55:58.073402  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.60763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.076248  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.392366ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.079304  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.426306ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.080302  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 33092
I1010 18:55:58.080339  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1010 18:55:58.080358  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1010 18:55:58.080366  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1010 18:55:58.083073  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.485371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.083414  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33093
I1010 18:55:58.083453  111177 pv_controller.go:800] volume "pv-w-pvc-prebound" entered phase "Available"
I1010 18:55:58.083484  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33093
I1010 18:55:58.083506  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 18:55:58.083526  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1010 18:55:58.083534  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1010 18:55:58.083542  111177 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1010 18:55:58.086613  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound", version 33095
I1010 18:55:58.086648  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:55:58.086665  111177 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1010 18:55:58.086685  111177 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1010 18:55:58.086704  111177 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: volume is unbound, binding
I1010 18:55:58.086776  111177 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.086791  111177 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.086835  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1010 18:55:58.087849  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (7.628078ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.090856  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (3.161556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.091336  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33096
I1010 18:55:58.091561  111177 pv_controller.go:864] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.091774  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 18:55:58.092040  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (3.236926ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.093297  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound
I1010 18:55:58.093449  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound
E1010 18:55:58.094054  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:55:58.094109  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1010 18:55:58.097505  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33096
I1010 18:55:58.097602  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.097638  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound
I1010 18:55:58.097687  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:55:58.097707  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1010 18:55:58.097720  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 18:55:58.098489  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (6.135659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.099687  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33099
I1010 18:55:58.099717  111177 pv_controller.go:800] volume "pv-w-pvc-prebound" entered phase "Bound"
I1010 18:55:58.099751  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1010 18:55:58.099771  111177 pv_controller.go:903] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.104199  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-prebound: (4.116026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.105497  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (6.760162ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I1010 18:55:58.105656  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" with version 33101
I1010 18:55:58.105699  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I1010 18:55:58.105713  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound] status: set phase Bound
I1010 18:55:58.105872  111177 pv_controller.go:792] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:55:58.105918  111177 pv_controller_base.go:204] could not sync volume "pv-w-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:55:58.105969  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33099
I1010 18:55:58.106015  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.106033  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound
I1010 18:55:58.106054  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:55:58.106066  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1010 18:55:58.106077  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 18:55:58.106087  111177 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1010 18:55:58.106835  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (11.618022ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.107785  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pvc-prebound/status: (13.060563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35020]
I1010 18:55:58.107902  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pvc-prebound: (11.440611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35116]
E1010 18:55:58.108525  111177 factory.go:685] pod is already present in the activeQ
E1010 18:55:58.108717  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:55:58.109132  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound
I1010 18:55:58.109243  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound
I1010 18:55:58.109546  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-prebound/status: (3.506923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I1010 18:55:58.109616  111177 scheduler_binder.go:653] PersistentVolume "pv-w-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound": No matching NodeSelectorTerms
I1010 18:55:58.109806  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound" match with Node "node-1"
I1010 18:55:58.110016  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" with version 33102
I1010 18:55:58.110051  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" entered phase "Bound"
I1010 18:55:58.110072  111177 pv_controller.go:959] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.110101  111177 pv_controller.go:960] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.110121  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:55:58.110162  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" with version 33102
I1010 18:55:58.110177  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:55:58.110202  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.110213  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: claim is already correctly bound
I1010 18:55:58.110232  111177 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.110245  111177 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.110380  111177 pv_controller.go:843] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.110401  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 18:55:58.110412  111177 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1010 18:55:58.110423  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1010 18:55:58.110483  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I1010 18:55:58.110501  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound] status: set phase Bound
I1010 18:55:58.110637  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound] status: phase Bound already set
I1010 18:55:58.110651  111177 pv_controller.go:959] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound"
I1010 18:55:58.110676  111177 pv_controller.go:960] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.110777  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:55:58.110026  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound", node "node-1"
I1010 18:55:58.110929  111177 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1010 18:55:58.111291  111177 factory.go:710] Attempting to bind pod-w-pvc-prebound to node-1
I1010 18:55:58.115419  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pvc-prebound/binding: (3.685012ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.117437  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:55:58.120707  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.786993ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.208005  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pvc-prebound: (5.192051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.211166  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-prebound: (2.356226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.214330  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (2.621254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.225559  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (10.03441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.233351  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (6.702372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.234113  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" deleted
I1010 18:55:58.234182  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33099
I1010 18:55:58.234223  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound (uid: dd489597-17c5-4f2b-9c32-8818eb8f7b3c)", boundByController: true
I1010 18:55:58.234240  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound
I1010 18:55:58.236190  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-prebound: (1.602957ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.236692  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound not found
I1010 18:55:58.236756  111177 pv_controller.go:577] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1010 18:55:58.236772  111177 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I1010 18:55:58.239018  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (4.121296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.239290  111177 store.go:365] GuaranteedUpdate of /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1010 18:55:58.239478  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.269382ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.240068  111177 pv_controller.go:792] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": StorageError: invalid object, Code: 4, Key: /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-w-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ac729387-543d-492f-861b-46066ab8fc6b, UID in object meta: 
I1010 18:55:58.240107  111177 pv_controller_base.go:204] could not sync volume "pv-w-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": StorageError: invalid object, Code: 4, Key: /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-w-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ac729387-543d-492f-861b-46066ab8fc6b, UID in object meta: 
I1010 18:55:58.240360  111177 pv_controller_base.go:216] volume "pv-w-pvc-prebound" deleted
I1010 18:55:58.240497  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-prebound" was already processed
I1010 18:55:58.248305  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.813899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.248847  111177 volume_binding_test.go:191] Running test wait cannot bind two
I1010 18:55:58.252450  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.24943ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.255481  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.103964ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.259469  111177 httplog.go:90] POST /api/v1/persistentvolumes: (3.336973ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.259992  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 33117
I1010 18:55:58.260037  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I1010 18:55:58.260062  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1010 18:55:58.260071  111177 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1010 18:55:58.262840  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (2.452244ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.263082  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 33118
I1010 18:55:58.263121  111177 pv_controller.go:800] volume "pv-w-cannotbind-1" entered phase "Available"
I1010 18:55:58.263398  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 33118
I1010 18:55:58.263434  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I1010 18:55:58.263459  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1010 18:55:58.263468  111177 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1010 18:55:58.263478  111177 pv_controller.go:782] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I1010 18:55:58.263779  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.954938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.264194  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 33119
I1010 18:55:58.264220  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 18:55:58.264241  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1010 18:55:58.264249  111177 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1010 18:55:58.266684  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.501256ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.268004  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1", version 33120
I1010 18:55:58.268040  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:58.268137  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1]: no volume found
I1010 18:55:58.268201  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1] status: set phase Pending
I1010 18:55:58.268223  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1] status: phase Pending already set
I1010 18:55:58.268253  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-cannotbind-1", UID:"32fa7fce-6aaf-4cd1-b611-8af0fac35df1", APIVersion:"v1", ResourceVersion:"33120", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:55:58.268621  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (4.081649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.269073  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 33121
I1010 18:55:58.269280  111177 pv_controller.go:800] volume "pv-w-cannotbind-2" entered phase "Available"
I1010 18:55:58.270348  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 33121
I1010 18:55:58.270389  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I1010 18:55:58.270406  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1010 18:55:58.270414  111177 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1010 18:55:58.270425  111177 pv_controller.go:782] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I1010 18:55:58.270510  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.443232ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.271078  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2", version 33122
I1010 18:55:58.271119  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:58.271173  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2]: no volume found
I1010 18:55:58.271199  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2] status: set phase Pending
I1010 18:55:58.271219  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2] status: phase Pending already set
I1010 18:55:58.271268  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-cannotbind-2", UID:"47c224f0-2ad3-47e9-8150-b2d7f284678b", APIVersion:"v1", ResourceVersion:"33122", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:55:58.272292  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.193065ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.274005  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (2.657812ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.274539  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.274560  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.275055  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" on node "node-1"
I1010 18:55:58.275071  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" on node "node-2"
I1010 18:55:58.275092  111177 scheduler_binder.go:725] storage class "wait-tc5p" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 18:55:58.275104  111177 scheduler_binder.go:725] storage class "wait-tc5p" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 18:55:58.275202  111177 factory.go:645] Unable to schedule volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 18:55:58.275287  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1010 18:55:58.275360  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.299139ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.278622  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2/status: (2.906045ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35122]
I1010 18:55:58.279360  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2: (3.549469ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.279631  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.135567ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
E1010 18:55:58.280012  111177 factory.go:685] pod is already present in the activeQ
I1010 18:55:58.281172  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2: (1.30919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35122]
I1010 18:55:58.281456  111177 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2 on any node.
I1010 18:55:58.283169  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.284309  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.284967  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" on node "node-2"
I1010 18:55:58.284995  111177 scheduler_binder.go:725] storage class "wait-tc5p" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 18:55:58.285125  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" on node "node-1"
I1010 18:55:58.285234  111177 scheduler_binder.go:725] storage class "wait-tc5p" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 18:55:58.285419  111177 factory.go:645] Unable to schedule volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 18:55:58.285510  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1010 18:55:58.289293  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2: (2.448217ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.289473  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.081059ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.289749  111177 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2 on any node.
I1010 18:55:58.290290  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2: (3.996483ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35006]
I1010 18:55:58.378243  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind-2: (3.153445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.382153  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-cannotbind-1: (2.308022ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.386100  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-cannotbind-2: (2.255554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.388669  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.847852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.391255  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (1.870457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.399682  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.399834  111177 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind-2
I1010 18:55:58.403473  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (11.455454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.404087  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.724384ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.410006  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-1" deleted
I1010 18:55:58.414930  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (10.348316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.415122  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind-2" deleted
I1010 18:55:58.435123  111177 pv_controller_base.go:216] volume "pv-w-cannotbind-1" deleted
I1010 18:55:58.443845  111177 pv_controller_base.go:216] volume "pv-w-cannotbind-2" deleted
I1010 18:55:58.444264  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (28.685005ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.463326  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (16.1614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.468866  111177 volume_binding_test.go:191] Running test immediate pv prebound
I1010 18:55:58.474115  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.356094ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.477859  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.391134ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.481667  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 33161
I1010 18:55:58.481770  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 18:55:58.481780  111177 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:55:58.481790  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 18:55:58.481680  111177 httplog.go:90] POST /api/v1/persistentvolumes: (3.184933ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.484839  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.207115ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.485192  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound", version 33163
I1010 18:55:58.485365  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:55:58.485911  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 18:55:58.486007  111177 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:55:58.486162  111177 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:55:58.485775  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.359357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.486241  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1010 18:55:58.486479  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 33164
I1010 18:55:58.486506  111177 pv_controller.go:800] volume "pv-i-prebound" entered phase "Available"
I1010 18:55:58.486553  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 33164
I1010 18:55:58.486579  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 18:55:58.486586  111177 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:55:58.486594  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 18:55:58.486602  111177 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1010 18:55:58.491027  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:55:58.491076  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
E1010 18:55:58.491520  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:55:58.491572  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I1010 18:55:58.491694  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (5.538751ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
I1010 18:55:58.496056  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound/status: (2.72811ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
E1010 18:55:58.496330  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:55:58.496459  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:55:58.496472  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
E1010 18:55:58.496796  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:55:58.496850  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 18:55:58.496874  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:55:58.496975  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.925614ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:55:58.498845  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (1.603808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:55:58.501960  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.461463ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:55:58.505316  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (7.517193ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35114]
E1010 18:55:58.505948  111177 factory.go:685] pod is already present in unschedulableQ
I1010 18:55:58.511665  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (23.323441ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35124]
I1010 18:55:58.512938  111177 pv_controller.go:854] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:55:58.512981  111177 pv_controller.go:936] error binding volume "pv-i-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:55:58.513054  111177 pv_controller_base.go:251] could not sync claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:55:58.595225  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.349227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
[... 79 near-identical poll entries omitted: GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound, one per ~100 ms, all 200, 18:55:58.696270 through 18:56:06.495477 ...]
I1010 18:56:06.527109  111177 httplog.go:90] GET /api/v1/namespaces/default: (2.757412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:06.530638  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.025652ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:06.533634  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.164583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:06.594898  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.147764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
[... 27 near-identical poll entries omitted: GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound, one per ~100 ms, all 200, 18:56:06.696019 through 18:56:09.298390 ...]
I1010 18:56:09.396037  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.248616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.496524  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.559848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.596361  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.320035ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.695903  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.772791ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.794988  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.082512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.895336  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.416001ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:09.997536  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (4.459524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.096757  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.563675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.197435  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.906683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.295108  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.432347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.395196  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.528658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.508361  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (15.582445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.595410  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.476675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.696268  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.35642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.795101  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.382559ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.895879  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.929568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:10.995159  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.30411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.094840  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.129744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.195142  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.315762ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.294913  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.130534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.394774  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (1.895498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.498792  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (6.062871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.598612  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (5.727219ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.695814  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.885451ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.795269  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.353646ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.818068  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:56:11.818233  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 33164
I1010 18:56:11.818324  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 18:56:11.818336  111177 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:56:11.818346  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 18:56:11.818355  111177 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1010 18:56:11.818388  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" with version 33163
I1010 18:56:11.818405  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:11.818452  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 18:56:11.818470  111177 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.818485  111177 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.818528  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1010 18:56:11.823666  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (4.378658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.824000  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:56:11.824035  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:56:11.824145  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35442
I1010 18:56:11.824210  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:11.824230  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:56:11.824255  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:11.824274  111177 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
E1010 18:56:11.824493  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:56:11.824549  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 18:56:11.824567  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:56:11.824568  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35442
I1010 18:56:11.824595  111177 pv_controller.go:864] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.824608  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1010 18:56:11.832433  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (7.500332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:11.832516  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (6.676036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I1010 18:56:11.832665  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35445
I1010 18:56:11.832703  111177 pv_controller.go:800] volume "pv-i-prebound" entered phase "Bound"
I1010 18:56:11.832719  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1010 18:56:11.832722  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35445
I1010 18:56:11.832775  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:11.832787  111177 pv_controller.go:903] volume "pv-i-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.832790  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:56:11.832809  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:11.832824  111177 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 18:56:11.833757  111177 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events/pod-i-pv-prebound.15cc5e07ffeae3e0: (7.505574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:11.836457  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-pv-prebound: (3.435928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I1010 18:56:11.836717  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" with version 35448
I1010 18:56:11.836805  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1010 18:56:11.836818  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound] status: set phase Bound
I1010 18:56:11.839612  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-pv-prebound/status: (2.51142ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:11.839889  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" with version 35450
I1010 18:56:11.839911  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" entered phase "Bound"
I1010 18:56:11.839926  111177 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.839946  111177 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:11.839957  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1010 18:56:11.839991  111177 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" version 35448
I1010 18:56:11.841203  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" with version 35450
I1010 18:56:11.841227  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1010 18:56:11.841245  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:11.841255  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: claim is already correctly bound
I1010 18:56:11.841266  111177 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.841274  111177 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.841289  111177 pv_controller.go:843] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.841296  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1010 18:56:11.841303  111177 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1010 18:56:11.841310  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1010 18:56:11.841326  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1010 18:56:11.841334  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound] status: set phase Bound
I1010 18:56:11.841348  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound] status: phase Bound already set
I1010 18:56:11.841357  111177 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound"
I1010 18:56:11.841369  111177 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:11.841378  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
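The pv_controller lines above show a two-way bind: set the PV's claimRef and phase Bound, then set the claim's volumeName and phase Bound, and a later resync pass re-runs the same steps as no-ops ("already bound", "phase Bound already set"). A simplified sketch of that idempotent sequence, with stand-in types rather than the real API objects:

```go
package main

import "fmt"

// Minimal model of the bind performed in the log above. Field names
// are simplified stand-ins for the real PersistentVolume and
// PersistentVolumeClaim API types.
type PV struct {
	Name     string
	ClaimRef string // "namespace/name" of the bound claim, "" if unbound
	Phase    string // Available | Bound | Released
}

type PVC struct {
	Key        string // "namespace/name"
	VolumeName string
	Phase      string // Pending | Bound
}

// bind is idempotent, like syncClaim: re-running it on an already
// bound pair only confirms the existing state.
func bind(pv *PV, pvc *PVC) {
	if pv.ClaimRef == "" {
		pv.ClaimRef = pvc.Key
	}
	pv.Phase = "Bound"
	if pvc.VolumeName == "" {
		pvc.VolumeName = pv.Name
	}
	pvc.Phase = "Bound"
}

func main() {
	pv := &PV{Name: "pv-i-prebound", ClaimRef: "ns/pvc-i-pv-prebound", Phase: "Available"}
	pvc := &PVC{Key: "ns/pvc-i-pv-prebound", Phase: "Pending"}
	bind(pv, pvc)
	bind(pv, pvc) // second pass is a no-op, as in the resync above
	fmt.Println(pv.Phase, pvc.Phase, pvc.VolumeName)
}
```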
I1010 18:56:11.894400  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (1.679864ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:11.995506  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.538677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.097677  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.805386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.195787  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.791644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.295149  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.4088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.394835  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.143756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.495302  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.239351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.595708  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.87773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.695243  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.248812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.795025  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.207991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.895031  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.215659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:12.994832  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.114247ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.095098  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.404987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.197096  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (4.427967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.295465  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.580245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.395808  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (2.708519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.494130  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (1.545828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.594672  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (1.974975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.610777  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:56:13.610815  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound
I1010 18:56:13.611049  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound" match with Node "node-1"
I1010 18:56:13.611256  111177 scheduler_binder.go:653] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound": No matching NodeSelectorTerms
I1010 18:56:13.611383  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound", node "node-1"
I1010 18:56:13.611410  111177 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1010 18:56:13.611482  111177 factory.go:710] Attempting to bind pod-i-pv-prebound to node-1
I1010 18:56:13.614700  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound/binding: (2.681664ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.615151  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:56:13.618593  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.899687ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.697105  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pv-prebound: (3.819344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.700908  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-pv-prebound: (2.781273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.703285  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.750528ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.713916  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (9.320692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.720088  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" deleted
I1010 18:56:13.720280  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35445
I1010 18:56:13.720452  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound (uid: 2473fcea-c3e9-414d-938f-41db529634e4)", boundByController: false
I1010 18:56:13.720588  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound
I1010 18:56:13.720708  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound not found
I1010 18:56:13.720971  111177 pv_controller.go:577] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1010 18:56:13.721095  111177 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Released
I1010 18:56:13.720465  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (5.972343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.726695  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (4.504932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.728766  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.847283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.729651  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 36014
I1010 18:56:13.732016  111177 pv_controller.go:800] volume "pv-i-prebound" entered phase "Released"
I1010 18:56:13.732046  111177 pv_controller.go:1013] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1010 18:56:13.732096  111177 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1010 18:56:13.732140  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-pv-prebound" was already processed
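When the claim is deleted above, the PV is marked Released and the controller consults its reclaim policy; here that policy is Retain, so the log reports "nothing to do" and the volume is left for manual cleanup. A small illustrative switch over the three policies (return strings are descriptive, not the controller's actual behavior):

```go
package main

import "fmt"

// reclaim sketches the decision made in reclaimVolume for a Released
// PV. "Retain" (the case in this log) leaves the volume and its data
// in place; "Recycle" is deprecated in real Kubernetes.
func reclaim(policy string) string {
	switch policy {
	case "Retain":
		return "nothing to do"
	case "Delete":
		return "delete the underlying volume"
	case "Recycle":
		return "scrub and make Available again"
	default:
		return "unknown policy"
	}
}

func main() {
	fmt.Println(reclaim("Retain"))
}
```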
I1010 18:56:13.737932  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.956366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.738188  111177 volume_binding_test.go:191] Running test immediate cannot bind
I1010 18:56:13.742074  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.60936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.744967  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.340555ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.748817  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (3.106203ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.748846  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind", version 36022
I1010 18:56:13.749269  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:13.749417  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind]: no volume found
I1010 18:56:13.749635  111177 pv_controller.go:1328] provisionClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind]: started
E1010 18:56:13.749866  111177 pv_controller.go:1333] error finding provisioning plugin for claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind: no volume plugin matched
I1010 18:56:13.750247  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-i-cannotbind", UID:"8dba7754-0935-4943-a7bb-8e7308ac6d95", APIVersion:"v1", ResourceVersion:"36022", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1010 18:56:13.753079  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.605896ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.753456  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (3.119074ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.753869  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind
I1010 18:56:13.753886  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind
E1010 18:56:13.754151  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:56:13.754202  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1010 18:56:13.756193  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-cannotbind: (1.64076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.756927  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-cannotbind/status: (2.41474ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
E1010 18:56:13.757170  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:56:13.758323  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.966642ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39346]
I1010 18:56:13.857580  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-cannotbind: (2.663407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.861021  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-cannotbind: (2.528259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.870387  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind
I1010 18:56:13.870453  111177 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-cannotbind
I1010 18:56:13.874222  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.03143ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.879678  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (17.931084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.892557  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (12.244603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.893178  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-cannotbind" deleted
I1010 18:56:13.895769  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.029943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.908637  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.200479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.908974  111177 volume_binding_test.go:191] Running test immediate pvc prebound
I1010 18:56:13.912515  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.142352ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.916183  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.130471ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.919954  111177 httplog.go:90] POST /api/v1/persistentvolumes: (3.13955ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.920189  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 36060
I1010 18:56:13.920229  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1010 18:56:13.920250  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1010 18:56:13.920259  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1010 18:56:13.922835  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.265529ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.923589  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound", version 36061
I1010 18:56:13.923629  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:13.923646  111177 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1010 18:56:13.923666  111177 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1010 18:56:13.923684  111177 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume is unbound, binding
I1010 18:56:13.923704  111177 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:13.923722  111177 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:13.923778  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1010 18:56:13.925142  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (4.620789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
I1010 18:56:13.925626  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36062
I1010 18:56:13.925666  111177 pv_controller.go:800] volume "pv-i-pvc-prebound" entered phase "Available"
I1010 18:56:13.926812  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.471658ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:13.927162  111177 pv_controller.go:854] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:13.927359  111177 pv_controller.go:936] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:13.927462  111177 pv_controller_base.go:251] could not sync claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:13.926369  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36062
I1010 18:56:13.929072  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 18:56:13.929097  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1010 18:56:13.929107  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1010 18:56:13.929251  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1010 18:56:13.936383  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
I1010 18:56:13.936437  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
E1010 18:56:13.936709  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:56:13.936939  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1010 18:56:13.939772  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.205126ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:13.940402  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (12.31005ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:13.941462  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound/status: (2.959158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35182]
E1010 18:56:13.942086  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:56:13.943115  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (4.679699ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39382]
I1010 18:56:14.044815  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.022229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
[... 24 similar wait-loop polls elided: GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound, 18:56:14.144 through 18:56:16.444, one every ~100ms, all 200 ...]
I1010 18:56:16.527008  111177 httplog.go:90] GET /api/v1/namespaces/default: (2.505456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:16.530222  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.424663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:16.532952  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.93468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:16.545391  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.977371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
[... 63 similar wait-loop polls elided: GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound, 18:56:16.645 through 18:56:22.844, one every ~100ms, all 200 ...]
I1010 18:56:22.943914  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.22167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.044028  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.342924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.145138  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.525911ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.244585  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.120175ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.344450  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.558295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.444063  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.267514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.545298  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.449398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.645584  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.826932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.744515  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.805306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.847305  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.759605ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:23.944049  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.264982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.044783  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.941519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.144836  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.86008ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.244094  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.379556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.343862  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (1.961921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.443717  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.025397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.544600  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.737492ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.644838  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.998088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.744554  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.843667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.845426  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.13885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:24.945477  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.591127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.044620  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.354613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.144516  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.647188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.248823  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (6.790956ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.344291  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.448157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.444373  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.518766ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.544567  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.639513ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.644471  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.566851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.745070  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.188579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.844802  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.939123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:25.943971  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.348548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.044933  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.184026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.144815  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.923954ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.243673  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (1.955279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.344925  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.343953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.444627  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.24624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.526845  111177 httplog.go:90] GET /api/v1/namespaces/default: (2.108188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.529694  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.095498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.532247  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.788997ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.544355  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.5353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.644929  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.001709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.745648  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.669109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.818461  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:56:26.818635  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36062
I1010 18:56:26.818688  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 18:56:26.818712  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1010 18:56:26.818740  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1010 18:56:26.818752  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1010 18:56:26.818794  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" with version 36061
I1010 18:56:26.818821  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:26.818840  111177 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1010 18:56:26.818858  111177 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1010 18:56:26.818881  111177 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume is unbound, binding
I1010 18:56:26.818939  111177 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:26.818953  111177 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:26.819008  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1010 18:56:26.824550  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38288
I1010 18:56:26.824610  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:26.824627  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound
I1010 18:56:26.824695  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:26.824775  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1010 18:56:26.824789  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:26.824553  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
I1010 18:56:26.825157  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
E1010 18:56:26.825576  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:56:26.825634  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 18:56:26.825670  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:56:26.826056  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (6.404542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.826402  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38288
I1010 18:56:26.826427  111177 pv_controller.go:864] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:26.826440  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:26.828821  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.372667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:26.829142  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38292
I1010 18:56:26.829170  111177 pv_controller.go:800] volume "pv-i-pvc-prebound" entered phase "Bound"
I1010 18:56:26.829203  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38292
I1010 18:56:26.829230  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:26.829256  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound
I1010 18:56:26.829281  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:26.829293  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1010 18:56:26.829305  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:26.829324  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 18:56:26.830637  111177 store.go:365] GuaranteedUpdate of /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1010 18:56:26.830971  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (4.014505ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I1010 18:56:26.831487  111177 pv_controller.go:792] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:26.831514  111177 pv_controller.go:942] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:26.831547  111177 pv_controller_base.go:251] could not sync claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
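The 409 above (etcd `GuaranteedUpdate ... failed because of a conflict`, then "the object has been modified; please apply your changes to the latest version and try again") is Kubernetes' optimistic concurrency control: a writer holding a stale `resourceVersion` is rejected, and the PV controller simply retries on its next sync after re-reading the object. A minimal self-contained sketch of that pattern, using a hypothetical in-memory `store` in place of the apiserver (the `store`, `update`, and `setPhaseWithRetry` names are illustrative, not real client-go API):

```go
package main

import (
	"errors"
	"fmt"
)

// store stands in for an apiserver-backed object with a resourceVersion.
// (Hypothetical type for illustration; not a real Kubernetes API.)
type store struct {
	version int
	phase   string
}

var errConflict = errors.New("the object has been modified; " +
	"please apply your changes to the latest version and try again")

// update succeeds only when the caller's version matches the stored one,
// mirroring the compare-and-swap behind etcd's GuaranteedUpdate (HTTP 409
// in the httplog entries above on mismatch).
func (s *store) update(version int, phase string) error {
	if version != s.version {
		return errConflict
	}
	s.version++
	s.phase = phase
	return nil
}

// setPhaseWithRetry re-reads the latest version and retries on conflict,
// the same recovery the PV controller performs on a later sync.
func setPhaseWithRetry(s *store, phase string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		latest := s.version // "GET the latest object"
		if err = s.update(latest, phase); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	s := &store{version: 38288, phase: "Available"}

	// One writer bumps the version first (the successful PUT).
	if err := s.update(38288, "Bound"); err != nil {
		panic(err)
	}
	// A second writer still holding stale version 38288 is rejected,
	// exactly like the 409 on PUT .../pv-i-pvc-prebound/status.
	if err := s.update(38288, "Bound"); !errors.Is(err, errConflict) {
		panic("expected a conflict for the stale version")
	}
	// Re-reading and retrying succeeds.
	if err := setPhaseWithRetry(s, "Bound", 2); err != nil {
		panic(err)
	}
	fmt.Println(s.phase)
}
```

In real controller code this loop is typically expressed with `retry.RetryOnConflict` from `k8s.io/client-go/util/retry`; the PV controller in this log instead just reports "could not sync claim" and relies on the next resync to pick up the fresh `resourceVersion`.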
I1010 18:56:26.833815  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (7.017169ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35180]
I1010 18:56:26.835073  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (6.755338ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42426]
I1010 18:56:26.846430  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.931509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:26.944075  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.346893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
[... 15 similar "GET .../pods/pod-i-pvc-prebound" poll entries elided (18:56:27.04 through 18:56:28.44, ~100ms apart, all 200, conn 127.0.0.1:39380) ...]
I1010 18:56:28.544241  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.229325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:28.614300  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
I1010 18:56:28.614355  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
E1010 18:56:28.614834  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:56:28.614916  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 18:56:28.614941  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:56:28.620717  111177 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events/pod-i-pvc-prebound.15cc5e0e9872027a: (3.777743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:28.621916  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (5.416858ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:28.644322  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.583833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
[... 31 similar "GET .../pods/pod-i-pvc-prebound" poll entries elided (18:56:28.74 through 18:56:31.74, ~100ms apart, all 200, conn 127.0.0.1:42430) ...]
I1010 18:56:31.843866  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.197453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:31.944938  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.197886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.044080  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.258958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.143974  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.18813ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.244248  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.5059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.344529  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.831743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.444171  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.22069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.544551  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.585687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.644803  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.938561ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.745791  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.200695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.845889  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (4.244056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:32.944077  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.24226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.043982  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.243278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.144887  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.044533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.245062  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.121696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.345054  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.055334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.444878  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.004612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.543567  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (1.893783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.644501  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.627408ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.745306  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.60094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.843904  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.200525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:33.944360  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.725159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.045643  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.856885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.144549  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.247697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.244540  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.714537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.344638  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.903839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.443898  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.272667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.543915  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.189238ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.644190  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.559435ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.744334  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.495447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.844804  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.955111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:34.944323  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.56779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.044081  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.139196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.143811  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.139553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.244838  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.016511ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.344049  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.224454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.444588  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.672422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.544360  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.489789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.645094  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.285026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.745560  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.710111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.844589  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.877422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:35.944067  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.456118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.044260  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.551724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.144615  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.922589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.245403  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.573824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.344678  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.89135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.445250  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.496142ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.527686  111177 httplog.go:90] GET /api/v1/namespaces/default: (2.380925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.530203  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.904031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.532078  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.402331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.544132  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.494833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.644309  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.545046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.744144  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.496296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.844103  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.290464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:36.944388  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.711406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.045402  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.460979ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.144944  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.108987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.243935  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.325651ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.343564  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (1.900905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.444281  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.531236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.544407  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.476111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.644420  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.499922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.744582  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.668936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.844581  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.823899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:37.944352  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.663119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.043819  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.214452ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.144014  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.363967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.244846  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.064287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.344801  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.878996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.444257  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.396439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.544553  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.68326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.644817  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.112253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.743979  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.176895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.843805  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.098909ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:38.944694  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.052905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.044462  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.647609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.144338  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.635894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.244780  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.920481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.344821  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.086347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.444537  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.148893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.544898  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.157398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.644072  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.368847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.743962  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.058495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.844305  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.459764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:39.944687  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.958451ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.045137  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.256222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.144340  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.6873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.244239  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.405609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.343856  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.140046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.444633  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.773081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.544433  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.610159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.644956  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.267404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.745447  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.712184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.844376  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.325642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:40.944277  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.43863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.044611  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.720106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.145563  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.647068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.245136  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (3.222103ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.344359  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.529549ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.444159  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.443723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.544002  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.275953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.644816  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.892353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.744326  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.603606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.818913  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:56:41.819103  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" with version 36061
I1010 18:56:41.819105  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38292
I1010 18:56:41.819141  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:41.819161  111177 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1010 18:56:41.819173  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.819181  111177 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.819188  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound
I1010 18:56:41.819194  111177 pv_controller.go:392] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume already bound, finishing the binding
I1010 18:56:41.819207  111177 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.819211  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 18:56:41.819226  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1010 18:56:41.819234  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:41.819238  111177 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.819243  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 18:56:41.819268  111177 pv_controller.go:843] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.819277  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:41.819283  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 18:56:41.819291  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1010 18:56:41.819304  111177 pv_controller.go:903] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.822618  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-prebound: (2.928781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.823215  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" with version 39815
I1010 18:56:41.823261  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I1010 18:56:41.823266  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
I1010 18:56:41.823276  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound] status: set phase Bound
I1010 18:56:41.823286  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound
I1010 18:56:41.823446  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound" match with Node "node-1"
I1010 18:56:41.823515  111177 scheduler_binder.go:653] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound": No matching NodeSelectorTerms
I1010 18:56:41.823592  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound", node "node-1"
I1010 18:56:41.823623  111177 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1010 18:56:41.823701  111177 factory.go:710] Attempting to bind pod-i-pvc-prebound to node-1
I1010 18:56:41.826525  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound/binding: (2.460269ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.826848  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:56:41.830287  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.985804ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.830287  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-prebound/status: (6.710004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.831094  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" with version 39819
I1010 18:56:41.831134  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" entered phase "Bound"
I1010 18:56:41.831156  111177 pv_controller.go:959] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.831187  111177 pv_controller.go:960] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.831207  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:56:41.831263  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" with version 39819
I1010 18:56:41.831280  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:56:41.831298  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.831308  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: claim is already correctly bound
I1010 18:56:41.831319  111177 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.831332  111177 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.831356  111177 pv_controller.go:843] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.831368  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 18:56:41.831378  111177 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 18:56:41.831389  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1010 18:56:41.831409  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I1010 18:56:41.831420  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound] status: set phase Bound
I1010 18:56:41.831453  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound] status: phase Bound already set
I1010 18:56:41.831470  111177 pv_controller.go:959] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound"
I1010 18:56:41.831490  111177 pv_controller.go:960] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.831501  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 18:56:41.844599  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-pvc-prebound: (2.866163ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.847580  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-prebound: (2.041874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.850237  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.101473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.858651  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (7.648498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.864605  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (5.312431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.865181  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" deleted
I1010 18:56:41.865235  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38292
I1010 18:56:41.865281  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound (uid: ab221792-fe8a-42ef-85a1-2d8bb221fb43)", boundByController: true
I1010 18:56:41.865294  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound
I1010 18:56:41.867926  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-prebound: (2.339749ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.869278  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound not found
I1010 18:56:41.869416  111177 pv_controller.go:577] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1010 18:56:41.869495  111177 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I1010 18:56:41.872008  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.64357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.874087  111177 store.go:365] GuaranteedUpdate of /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1010 18:56:41.874278  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (4.155603ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.874529  111177 pv_controller.go:792] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f3b45311-1fc7-4b1f-8e42-5b8c858e2daf, UID in object meta: 
I1010 18:56:41.874613  111177 pv_controller_base.go:204] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f3b45311-1fc7-4b1f-8e42-5b8c858e2daf, UID in object meta: 
I1010 18:56:41.874712  111177 pv_controller_base.go:216] volume "pv-i-pvc-prebound" deleted
I1010 18:56:41.874867  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-prebound" was already processed
I1010 18:56:41.879317  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.826189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.879506  111177 volume_binding_test.go:191] Running test wait cannot bind
I1010 18:56:41.881957  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.740091ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.884573  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.874094ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.888352  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (3.057137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.889014  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind", version 39829
I1010 18:56:41.889532  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:41.889776  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind]: no volume found
I1010 18:56:41.889945  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind] status: set phase Pending
I1010 18:56:41.890080  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind] status: phase Pending already set
I1010 18:56:41.890246  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-cannotbind", UID:"67d32f3d-38a8-4d1d-8ebd-b2d28cda7810", APIVersion:"v1", ResourceVersion:"39829", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:56:41.892690  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.137617ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.893558  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind
I1010 18:56:41.893596  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind
I1010 18:56:41.893850  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (4.526581ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.894116  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind" on node "node-1"
I1010 18:56:41.894152  111177 scheduler_binder.go:725] storage class "wait-fspf" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind" does not support dynamic provisioning
I1010 18:56:41.894208  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind" on node "node-2"
I1010 18:56:41.894236  111177 scheduler_binder.go:725] storage class "wait-fspf" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind" does not support dynamic provisioning
I1010 18:56:41.894331  111177 factory.go:645] Unable to schedule volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 18:56:41.894497  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1010 18:56:41.896514  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind: (1.50132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:41.897277  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind/status: (2.435158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.898875  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.840855ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:41.899546  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind: (1.436514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42430]
I1010 18:56:41.899942  111177 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind on any node.
I1010 18:56:41.997424  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-cannotbind: (2.57263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.000315  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-cannotbind: (2.169041ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.009136  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind
I1010 18:56:42.009279  111177 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-cannotbind
I1010 18:56:42.010438  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (9.434986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.012336  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.459385ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.016857  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (4.660241ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.017310  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-cannotbind" deleted
I1010 18:56:42.018947  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.425134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.027255  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.385263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.027688  111177 volume_binding_test.go:191] Running test wait pv prebound
I1010 18:56:42.030099  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.981223ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.032224  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.576072ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.035085  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 39847
I1010 18:56:42.035135  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 18:56:42.035145  111177 pv_controller.go:508] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.035155  111177 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Available
I1010 18:56:42.035155  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.517353ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.042420  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (6.911742ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.043654  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39848
I1010 18:56:42.043710  111177 pv_controller.go:800] volume "pv-w-prebound" entered phase "Available"
I1010 18:56:42.043934  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39848
I1010 18:56:42.043994  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 18:56:42.044003  111177 pv_controller.go:508] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.044012  111177 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Available
I1010 18:56:42.044022  111177 pv_controller.go:782] updating PersistentVolume[pv-w-prebound]: phase Available already set
I1010 18:56:42.045055  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound", version 39849
I1010 18:56:42.045099  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.045154  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 18:56:42.045178  111177 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.045192  111177 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.045229  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1010 18:56:42.046281  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (10.293271ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.049645  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (3.80271ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.049855  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39850
I1010 18:56:42.049903  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.049918  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.049935  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39850
I1010 18:56:42.049951  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.049952  111177 pv_controller.go:864] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.049969  111177 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 18:56:42.049973  111177 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1010 18:56:42.051629  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (4.304296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.052294  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound
I1010 18:56:42.052315  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound
I1010 18:56:42.052657  111177 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound" on node "node-1"
I1010 18:56:42.052659  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" on node "node-2"
I1010 18:56:42.052696  111177 scheduler_binder.go:725] storage class "wait-xzpx" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" does not support dynamic provisioning
I1010 18:56:42.052904  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound", node "node-1"
I1010 18:56:42.053017  111177 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound", node "node-1"
I1010 18:56:42.053036  111177 scheduler_binder.go:404] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1010 18:56:42.053433  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39852
I1010 18:56:42.053476  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.053486  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.053501  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.053515  111177 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 18:56:42.053813  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (3.576245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.054185  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39852
I1010 18:56:42.054218  111177 pv_controller.go:800] volume "pv-w-prebound" entered phase "Bound"
I1010 18:56:42.054235  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1010 18:56:42.054302  111177 pv_controller.go:903] volume "pv-w-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.058109  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-pv-prebound: (3.526852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.058489  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" with version 39853
I1010 18:56:42.058523  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1010 18:56:42.058538  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound] status: set phase Bound
I1010 18:56:42.064852  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-pv-prebound/status: (5.908016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.065355  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" with version 39854
I1010 18:56:42.065408  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" entered phase "Bound"
I1010 18:56:42.065429  111177 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.065467  111177 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.065493  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 18:56:42.065549  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" with version 39854
I1010 18:56:42.065566  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 18:56:42.065588  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.065610  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: claim is already correctly bound
I1010 18:56:42.065621  111177 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.065634  111177 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.065685  111177 pv_controller.go:843] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.065705  111177 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1010 18:56:42.065715  111177 pv_controller.go:782] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1010 18:56:42.065792  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1010 18:56:42.065829  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1010 18:56:42.065840  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound] status: set phase Bound
I1010 18:56:42.065892  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound] status: phase Bound already set
I1010 18:56:42.065913  111177 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound"
I1010 18:56:42.065946  111177 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.065963  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 18:56:42.069280  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (15.913873ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.070260  111177 scheduler_binder.go:407] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:42.070304  111177 scheduler_assume_cache.go:337] Restored v1.PersistentVolume "pv-w-prebound"
I1010 18:56:42.070344  111177 scheduler.go:498] Failed to bind volumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
E1010 18:56:42.070383  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again; retrying
I1010 18:56:42.070430  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound to (PodScheduled==False, Reason=VolumeBindingFailed)
I1010 18:56:42.075711  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pv-prebound: (4.12242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.076167  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pv-prebound/status: (5.187936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
E1010 18:56:42.076673  111177 scheduler.go:674] error binding volumes: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 18:56:42.076887  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound
I1010 18:56:42.076915  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound
I1010 18:56:42.077153  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound" match with Node "node-1"
I1010 18:56:42.077291  111177 scheduler_binder.go:653] PersistentVolume "pv-w-prebound", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound": No matching NodeSelectorTerms
I1010 18:56:42.077410  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound", node "node-1"
I1010 18:56:42.077505  111177 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1010 18:56:42.077602  111177 factory.go:710] Attempting to bind pod-w-pv-prebound to node-1
I1010 18:56:42.080713  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pv-prebound/binding: (2.716248ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.081184  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:56:42.081691  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (9.09116ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.083558  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.001218ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44566]
I1010 18:56:42.154636  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-pv-prebound: (2.029922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.157798  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-pv-prebound: (2.367833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.160476  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (2.127366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.175453  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (14.132946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.183305  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (7.226681ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.183476  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" deleted
I1010 18:56:42.183565  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39852
I1010 18:56:42.183611  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.183633  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.183654  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound not found
I1010 18:56:42.183668  111177 pv_controller.go:577] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1010 18:56:42.183681  111177 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Released
I1010 18:56:42.186853  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.794397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.187902  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39904
I1010 18:56:42.188060  111177 pv_controller.go:800] volume "pv-w-prebound" entered phase "Released"
I1010 18:56:42.188125  111177 pv_controller.go:1013] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1010 18:56:42.188231  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39904
I1010 18:56:42.188329  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound (uid: 074a69b2-1ba0-4058-b2f9-ca540eb74068)", boundByController: false
I1010 18:56:42.188420  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound
I1010 18:56:42.188513  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound not found
I1010 18:56:42.188590  111177 pv_controller.go:1013] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1010 18:56:42.190150  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.101326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.190476  111177 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1010 18:56:42.190532  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-pv-prebound" was already processed
I1010 18:56:42.199187  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.290599ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.199512  111177 volume_binding_test.go:191] Running test wait can bind two
I1010 18:56:42.201443  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.637935ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.204107  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.205471ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.207166  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-2", version 39915
I1010 18:56:42.207210  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 18:56:42.207231  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 18:56:42.207239  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 18:56:42.207252  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.705569ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.209489  111177 httplog.go:90] POST /api/v1/persistentvolumes: (1.947972ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.209851  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.385084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.211748  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 39916
I1010 18:56:42.211779  111177 pv_controller.go:800] volume "pv-w-canbind-2" entered phase "Available"
I1010 18:56:42.211811  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-3", version 39917
I1010 18:56:42.211828  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I1010 18:56:42.211848  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 18:56:42.211854  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 18:56:42.214356  111177 httplog.go:90] POST /api/v1/persistentvolumes: (1.999643ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.215069  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.674902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.215826  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 39922
I1010 18:56:42.215861  111177 pv_controller.go:800] volume "pv-w-canbind-3" entered phase "Available"
I1010 18:56:42.215890  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 39916
I1010 18:56:42.215908  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 18:56:42.215929  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 18:56:42.215937  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 18:56:42.215945  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1010 18:56:42.215962  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-5", version 39920
I1010 18:56:42.215975  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I1010 18:56:42.215995  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1010 18:56:42.216000  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1010 18:56:42.217252  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.33503ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.218147  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (1.946432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.218831  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39925
I1010 18:56:42.218853  111177 pv_controller.go:800] volume "pv-w-canbind-5" entered phase "Available"
I1010 18:56:42.218876  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 39922
I1010 18:56:42.218890  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1010 18:56:42.218905  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 18:56:42.218910  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 18:56:42.218916  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1010 18:56:42.218926  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39925
I1010 18:56:42.218934  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I1010 18:56:42.218947  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1010 18:56:42.218952  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1010 18:56:42.218961  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I1010 18:56:42.219061  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2", version 39923
I1010 18:56:42.219089  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.219130  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: volume "pv-w-canbind-5" found: phase: Available, bound to: "", boundByController: false
I1010 18:56:42.219187  111177 pv_controller.go:933] binding volume "pv-w-canbind-5" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.219246  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-5]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.219354  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" bound to volume "pv-w-canbind-5"
I1010 18:56:42.220519  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.564476ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.224042  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39932
I1010 18:56:42.224112  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:42.224126  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:56:42.224148  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.224166  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-5]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:56:42.224784  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (3.65301ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.225399  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:56:42.225423  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:56:42.225719  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" on node "node-1"
I1010 18:56:42.225781  111177 scheduler_binder.go:725] storage class "wait-fb2f" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" does not support dynamic provisioning
I1010 18:56:42.225880  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" on node "node-2"
I1010 18:56:42.225916  111177 scheduler_binder.go:725] storage class "wait-fb2f" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" does not support dynamic provisioning
I1010 18:56:42.225966  111177 factory.go:645] Unable to schedule volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 18:56:42.226009  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1010 18:56:42.226483  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5: (6.818949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.226722  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39932
I1010 18:56:42.226773  111177 pv_controller.go:864] updating PersistentVolume[pv-w-canbind-5]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.226785  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:56:42.229342  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2/status: (2.964353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:42.229410  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.072897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.233016  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39938
I1010 18:56:42.233077  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:42.233091  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:56:42.233113  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.233129  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-5]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:56:42.233394  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (5.887296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44588]
I1010 18:56:42.233615  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (6.294835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I1010 18:56:42.233954  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39938
I1010 18:56:42.233999  111177 pv_controller.go:800] volume "pv-w-canbind-5" entered phase "Bound"
I1010 18:56:42.234018  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: binding to "pv-w-canbind-5"
I1010 18:56:42.234039  111177 pv_controller.go:903] volume "pv-w-canbind-5" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.234677  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.713797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.235167  111177 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2 on any node.
I1010 18:56:42.237517  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-2: (3.057901ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44592]
I1010 18:56:42.238089  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" with version 39942
I1010 18:56:42.238118  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: bound to "pv-w-canbind-5"
I1010 18:56:42.238130  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: set phase Bound
I1010 18:56:42.242755  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-2/status: (4.192623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.243365  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" with version 39947
I1010 18:56:42.243406  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" entered phase "Bound"
I1010 18:56:42.243427  111177 pv_controller.go:959] volume "pv-w-canbind-5" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.243454  111177 pv_controller.go:960] volume "pv-w-canbind-5" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:42.243473  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:42.243521  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3", version 39928
I1010 18:56:42.243549  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:42.243593  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: no volume found
I1010 18:56:42.243619  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: set phase Pending
I1010 18:56:42.243636  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: phase Pending already set
I1010 18:56:42.243660  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" with version 39947
I1010 18:56:42.243675  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:42.243693  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: volume "pv-w-canbind-5" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:42.243769  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: claim is already correctly bound
I1010 18:56:42.243781  111177 pv_controller.go:933] binding volume "pv-w-canbind-5" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.243793  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-5]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.243815  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-5]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.243828  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:56:42.243837  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Bound already set
I1010 18:56:42.243847  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: binding to "pv-w-canbind-5"
I1010 18:56:42.243869  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: already bound to "pv-w-canbind-5"
I1010 18:56:42.243880  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: set phase Bound
I1010 18:56:42.243899  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: phase Bound already set
I1010 18:56:42.243910  111177 pv_controller.go:959] volume "pv-w-canbind-5" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:42.243930  111177 pv_controller.go:960] volume "pv-w-canbind-5" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:42.243947  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:42.244271  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-canbind-3", UID:"7f79556b-67c8-4896-9fbf-67dcaa37e1e3", APIVersion:"v1", ResourceVersion:"39928", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:56:42.248767  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.248165ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.328408  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.3192ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.428387  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.54259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.528306  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.383231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.627912  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.163678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.728856  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.871572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.829565  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.070618ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:42.929547  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.233992ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.030087  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.812429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.128639  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.785495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.228539  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.6286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.329016  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.994302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.429042  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.89869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.529538  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.595473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.617426  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:56:43.617487  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:56:43.617908  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2" match with Node "node-1"
I1010 18:56:43.617930  111177 scheduler_binder.go:653] PersistentVolume "pv-w-canbind-5", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2": No matching NodeSelectorTerms
I1010 18:56:43.617994  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" on node "node-1"
I1010 18:56:43.618002  111177 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2" on node "node-2"
I1010 18:56:43.618014  111177 scheduler_binder.go:725] storage class "wait-fb2f" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" does not support dynamic provisioning
I1010 18:56:43.618096  111177 factory.go:645] Unable to schedule volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2: no fit: 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had volume node affinity conflict.; waiting
I1010 18:56:43.618156  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2 to (PodScheduled==False, Reason=Unschedulable)
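The scheduler lines above show the volume-binding predicate at work: the pod is rejected because on one node the bound PV's node affinity has "No matching NodeSelectorTerms", and on the other no unbound PV matches the remaining claim (and the storage class does not support dynamic provisioning). As a hedged illustration only (a simplified standalone sketch, not the actual `scheduler_binder.go` implementation; the label key and values are hypothetical), the affinity check behaves like an OR over terms, each term an AND over requirements:

```go
package main

import "fmt"

// requirement models a single "In"-operator node selector requirement:
// the node's label value for key must be one of values.
type requirement struct {
	key    string
	values []string
}

// term is a conjunction: every requirement in the term must match.
type term []requirement

// matchesAny reports whether the node's labels satisfy at least one
// term (terms are disjunctive, mirroring NodeSelectorTerms semantics).
func matchesAny(nodeLabels map[string]string, terms []term) bool {
	for _, t := range terms {
		ok := true
		for _, r := range t {
			v, found := nodeLabels[r.key]
			if !found {
				ok = false
				break
			}
			in := false
			for _, want := range r.values {
				if v == want {
					in = true
					break
				}
			}
			if !in {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical example: a PV pinned to one hostname cannot be used
	// from a node with a different hostname label, which surfaces in the
	// log as a volume node affinity conflict.
	terms := []term{{{key: "kubernetes.io/hostname", values: []string{"node-2"}}}}
	fmt.Println(matchesAny(map[string]string{"kubernetes.io/hostname": "node-1"}, terms))
	fmt.Println(matchesAny(map[string]string{"kubernetes.io/hostname": "node-2"}, terms))
}
```

With no node passing both the affinity check and the unbound-claim match, the scheduler reports "0/2 nodes are available" and marks the pod Unschedulable, which is exactly the condition update seen in the subsequent PUT to `/pods/pod-w-canbind-2/status`.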
I1010 18:56:43.620672  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.045369ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:43.621618  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.365189ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:43.623104  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2/status: (4.46399ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44582]
I1010 18:56:43.625014  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.373049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:43.625294  111177 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2 on any node.
I1010 18:56:43.627183  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.520597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:43.727835  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.95777ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:43.828050  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.181206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:43.928463  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.436315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.028858  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.786732ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.128521  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.60893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.228529  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.571992ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.328040  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.060014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.428302  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.410973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.528018  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.127331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.629235  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.985828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.728318  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.407461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.828298  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.163709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:44.928885  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.858209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.029101  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.080898ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.128330  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.381578ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.228442  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.578819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.328744  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.753768ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.428684  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.580629ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.528585  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.641847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.628715  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.714151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.728869  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.874251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.828655  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.651524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:45.929231  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.391785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.027926  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.036994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.127817  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.115769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.227954  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.990204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.328926  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.944353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.428423  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.422914ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.528623  111177 httplog.go:90] GET /api/v1/namespaces/default: (3.40275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.529103  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.690168ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:46.530555  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.448114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.532589  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.434737ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.628542  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.626569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.728701  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.512793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.828855  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.884279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:46.928991  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.914284ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.028480  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.523204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.128159  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.292158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.230449  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.96579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.328045  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.125633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.428609  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.573763ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.529042  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.757339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.628576  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.610737ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.737988  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (10.039421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.828591  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.661928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:47.929822  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.932329ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.028717  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.779251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.128922  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.058445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.228477  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.469156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.328624  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.571184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.428259  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.355177ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.528813  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.810109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.628249  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.530452ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.728494  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.566431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.828843  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.904056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:48.929441  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.323935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.029673  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.711796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.131359  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (5.552343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.228960  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.012449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.328480  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.500123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.429358  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.302459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.530799  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.755735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.629046  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.011714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.732177  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.093313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.828305  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.345878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:49.928302  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.499038ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.028163  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.293381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.128336  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.525191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.229045  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.007065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.330095  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.847793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.430875  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.809055ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.529228  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.166541ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.628754  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.832893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.728061  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.137325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.830248  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.248706ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:50.929069  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.053925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.028799  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.876621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.128432  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.412943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.229712  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.759066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.328809  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.556222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.428159  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.309965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.528349  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.346507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.628606  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.684317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.728677  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.56926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.828622  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.781488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:51.928619  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.732083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.029362  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.281811ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.128675  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.766198ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.229342  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.203532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.329113  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.057009ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.429051  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.008022ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.528859  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.837462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.628290  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.355149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.728299  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.387325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.828139  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.334857ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:52.928436  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.446007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.028881  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.792326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.128796  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.878665ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.228421  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.541199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.241175  111177 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.648642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.244310  111177 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.404118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.246963  111177 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.054639ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.328805  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.475328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.428175  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.266122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.528379  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.405966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.632463  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (6.400459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.730912  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.839845ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.829660  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.866904ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:53.928663  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.460047ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.028424  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.463908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.128693  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.75276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.228577  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.662096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.328616  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.667139ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.428408  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.366274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.528538  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.689773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.629843  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.869482ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.728462  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.574596ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.828329  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.386127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:54.928690  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.793301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.028926  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.005418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.128911  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.977914ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.228255  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.276802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.328415  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.523841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.428661  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.77564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.528285  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.278517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.627970  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.266173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.727907  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.010014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.830060  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.034182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:55.928568  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.617967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.028447  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.490363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.129129  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.166868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.227590  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.765416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.328150  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.189314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.429272  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.095198ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.527297  111177 httplog.go:90] GET /api/v1/namespaces/default: (1.987641ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.528826  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.610101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:56:56.529647  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.369767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.531345  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.242916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.628208  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.284883ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.728144  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.21354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.819244  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:56:56.819405  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 39916
I1010 18:56:56.819453  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 18:56:56.819480  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 18:56:56.819490  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 18:56:56.819500  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1010 18:56:56.819518  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 39922
I1010 18:56:56.819533  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1010 18:56:56.819550  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 18:56:56.819555  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 18:56:56.819563  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1010 18:56:56.819577  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39938
I1010 18:56:56.819605  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:56.819623  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:56:56.819655  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 found: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:56.819676  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-w-canbind-5]: all is bound
I1010 18:56:56.819687  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:56:56.819694  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Bound already set
I1010 18:56:56.819755  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" with version 39928
I1010 18:56:56.819777  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:56:56.819833  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: no volume found
I1010 18:56:56.819886  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: set phase Pending
I1010 18:56:56.819912  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: phase Pending already set
I1010 18:56:56.819930  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" with version 39947
I1010 18:56:56.819940  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:56.819968  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: volume "pv-w-canbind-5" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:56.819980  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: claim is already correctly bound
I1010 18:56:56.819991  111177 pv_controller.go:933] binding volume "pv-w-canbind-5" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:56.820002  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-5]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:56.820021  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-5]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:56.820029  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:56:56.820037  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Bound already set
I1010 18:56:56.820047  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: binding to "pv-w-canbind-5"
I1010 18:56:56.820069  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: already bound to "pv-w-canbind-5"
I1010 18:56:56.820088  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: set phase Bound
I1010 18:56:56.820115  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: phase Bound already set
I1010 18:56:56.820127  111177 pv_controller.go:959] volume "pv-w-canbind-5" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:56:56.820141  111177 pv_controller.go:960] volume "pv-w-canbind-5" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:56:56.820151  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:56:56.820672  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-canbind-3", UID:"7f79556b-67c8-4896-9fbf-67dcaa37e1e3", APIVersion:"v1", ResourceVersion:"39928", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:56:56.824695  111177 httplog.go:90] PATCH /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events/pvc-w-canbind-3.15cc5e122f6dccaa: (3.870186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.827293  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.711926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:56.928021  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.128764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.031481  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (5.424937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.127979  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.03355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.233014  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (6.936486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.328525  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.352905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.429099  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.835933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.528062  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.068524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.628904  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.926124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.729240  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.15688ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.828443  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.407574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:57.928916  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.016366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.028689  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.701083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.127830  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.904548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.228242  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.363648ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.327954  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.151147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.429184  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.990343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.529308  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.081095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.628754  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.885017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.727962  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.113597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.827595  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.766123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:58.928187  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.258298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.029183  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.07277ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.128791  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.799488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.231139  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.296254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.328335  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.506436ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.427504  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.697144ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.528657  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.765307ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.629298  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.387809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.728585  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.545795ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.829070  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.16021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:56:59.928412  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.521872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.029582  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.557829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.128803  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.758511ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.228308  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.413411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.328141  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.154757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.428453  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.427323ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.528776  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.634723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.628393  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.399775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.728190  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.273614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.828670  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.627571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:00.928710  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.574609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.027966  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.100517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.128118  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.222557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.228179  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.326905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.328544  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.564749ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.428494  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.442093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.529349  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.800494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.629254  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.241927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.727673  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.928351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.828254  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.048722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:01.928030  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.110797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.028124  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.141796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.129959  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.122741ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.228342  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.375734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.328633  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.538799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.428600  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.514141ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.528036  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.104484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.629117  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.085433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.728572  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.492333ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.829671  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.409715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:02.927982  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.127869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.028310  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.47522ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.127638  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.906233ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.228110  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.168309ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.328519  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.496802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.428543  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.552745ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.528057  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.073151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.627563  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.719467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.728119  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.218138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.829616  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.583432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:03.927938  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.99432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.027794  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.949006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.128511  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.608742ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.228203  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.293921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.328129  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.198112ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.428265  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.10792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.528862  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.829961ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.628417  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.330986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.728111  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.250608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.828615  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.660913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:04.928704  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.803239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.028557  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.570394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.128175  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.265539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.228496  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.537464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.328586  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.582326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.428622  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.731257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.529620  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.663787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.628151  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.245155ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.728528  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.611242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.828536  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.656432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:05.928067  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.23046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.027890  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.990298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.129098  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.141903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.228874  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.678641ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.329218  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.026849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.431585  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (5.704583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.528710  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.314823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:06.529454  111177 httplog.go:90] GET /api/v1/namespaces/default: (4.031529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.532711  111177 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.803593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.534695  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.347223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.628056  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.240518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.729339  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.358786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.829158  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.567723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:06.928293  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.371752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.028583  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.570897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.128409  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.549769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.228574  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.570234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.327989  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.120147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.427986  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.113432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.528173  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.253213ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.628517  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.370388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.728339  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.313998ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.828689  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.755336ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:07.928689  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.505581ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.027572  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.751337ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.128041  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.052235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.227770  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.881888ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.328170  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.231851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.428344  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.373656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.528508  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.244916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.627991  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.119257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.728239  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.226738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.828176  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.278971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:08.928149  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.093311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.032225  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (4.117499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.128583  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.679076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.227835  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.952723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.327776  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.806566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.429296  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.215299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.528880  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.687389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.628323  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.331303ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.728117  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.272694ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.828239  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.171498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:09.928561  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.655941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.032594  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (5.560169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.129655  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.61853ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.228878  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.820171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.328658  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.658263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.428173  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.253808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.528626  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.574899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.629721  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.366909ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.728321  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.281877ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.828234  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.332925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:10.928252  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.32044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.028565  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.55781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.129173  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.191845ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.228479  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.642692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.335213  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.502475ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.428469  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.508317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.528678  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.318654ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.629071  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.682149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.729020  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.065597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.819557  111177 pv_controller_base.go:426] resyncing PV controller
I1010 18:57:11.819712  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 39916
I1010 18:57:11.819759  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" with version 39947
I1010 18:57:11.819796  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 18:57:11.819810  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:57:11.819828  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 18:57:11.819847  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 18:57:11.819856  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1010 18:57:11.819866  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: volume "pv-w-canbind-5" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:57:11.819876  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 39922
I1010 18:57:11.819883  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: claim is already correctly bound
I1010 18:57:11.819890  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1010 18:57:11.819896  111177 pv_controller.go:933] binding volume "pv-w-canbind-5" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:57:11.819911  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-5]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:57:11.819911  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 18:57:11.819924  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 18:57:11.819930  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1010 18:57:11.819939  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39938
I1010 18:57:11.819945  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-5]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:57:11.819957  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:57:11.819957  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:57:11.819968  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Bound already set
I1010 18:57:11.819971  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:57:11.819980  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: binding to "pv-w-canbind-5"
I1010 18:57:11.819987  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 found: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:57:11.819996  111177 pv_controller.go:621] synchronizing PersistentVolume[pv-w-canbind-5]: all is bound
I1010 18:57:11.819999  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Bound
I1010 18:57:11.820004  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2]: already bound to "pv-w-canbind-5"
I1010 18:57:11.820005  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Bound already set
I1010 18:57:11.820017  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: set phase Bound
I1010 18:57:11.820047  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2] status: phase Bound already set
I1010 18:57:11.820060  111177 pv_controller.go:959] volume "pv-w-canbind-5" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2"
I1010 18:57:11.820081  111177 pv_controller.go:960] volume "pv-w-canbind-5" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:57:11.820096  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-5", bindCompleted: true, boundByController: true
I1010 18:57:11.820124  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" with version 39928
I1010 18:57:11.820138  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:11.820214  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3]: no volume found
I1010 18:57:11.820261  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: set phase Pending
I1010 18:57:11.820279  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3] status: phase Pending already set
I1010 18:57:11.820408  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-canbind-3", UID:"7f79556b-67c8-4896-9fbf-67dcaa37e1e3", APIVersion:"v1", ResourceVersion:"39928", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:57:11.824152  111177 httplog.go:90] PATCH /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events/pvc-w-canbind-3.15cc5e122f6dccaa: (3.137303ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.827471  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.832393ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:11.927837  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (1.915112ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.029148  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.93486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.128825  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.806961ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.229278  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (3.201598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.232596  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-w-canbind-2: (2.564779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.235654  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-2: (2.279828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.238213  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-3: (1.915598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.240790  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.733642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.243002  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.664716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.244921  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.42847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.251056  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:57:12.251118  111177 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-w-canbind-2
I1010 18:57:12.253064  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (7.742189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.253768  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.278427ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.258776  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" deleted
I1010 18:57:12.258827  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 39938
I1010 18:57:12.258866  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:57:12.258877  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:57:12.260379  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-2: (1.193245ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.260801  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 not found
I1010 18:57:12.260848  111177 pv_controller.go:577] volume "pv-w-canbind-5" is released and reclaim policy "Retain" will be executed
I1010 18:57:12.260860  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Released
I1010 18:57:12.261897  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (8.271211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.262143  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3" deleted
I1010 18:57:12.266486  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (3.119308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.267098  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 43633
I1010 18:57:12.267152  111177 pv_controller.go:800] volume "pv-w-canbind-5" entered phase "Released"
I1010 18:57:12.267168  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind-5]: policy is Retain, nothing to do
I1010 18:57:12.267198  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 43633
I1010 18:57:12.267248  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Released, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 (uid: 190ef258-649e-4a72-b31a-f98bab86c0c1)", boundByController: true
I1010 18:57:12.267261  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-5]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2
I1010 18:57:12.267276  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-5]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2 not found
I1010 18:57:12.267281  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind-5]: policy is Retain, nothing to do
I1010 18:57:12.267875  111177 pv_controller_base.go:216] volume "pv-w-canbind-2" deleted
I1010 18:57:12.270372  111177 pv_controller_base.go:216] volume "pv-w-canbind-3" deleted
I1010 18:57:12.273025  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.763739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.273078  111177 pv_controller_base.go:216] volume "pv-w-canbind-5" deleted
I1010 18:57:12.273131  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-2" was already processed
I1010 18:57:12.281302  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.762633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.281598  111177 volume_binding_test.go:191] Running test mix immediate and wait
I1010 18:57:12.284893  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.88877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.291447  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.349381ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.294508  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.379081ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.295743  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-4", version 43641
I1010 18:57:12.295852  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I1010 18:57:12.295876  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1010 18:57:12.295887  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1010 18:57:12.299036  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.720716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.299277  111177 httplog.go:90] POST /api/v1/persistentvolumes: (3.552925ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.299556  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43643
I1010 18:57:12.299625  111177 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Available"
I1010 18:57:12.300646  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43643
I1010 18:57:12.300765  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I1010 18:57:12.300789  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1010 18:57:12.300798  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1010 18:57:12.300807  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I1010 18:57:12.300833  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind-2", version 43644
I1010 18:57:12.300846  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 18:57:12.300863  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1010 18:57:12.300870  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1010 18:57:12.302276  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.184432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.303939  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4", version 43646
I1010 18:57:12.303985  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:12.304029  111177 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: no volume found
I1010 18:57:12.304067  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4] status: set phase Pending
I1010 18:57:12.304085  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4] status: phase Pending already set
I1010 18:57:12.304192  111177 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c", Name:"pvc-w-canbind-4", UID:"305c269c-97b3-4e38-9825-0bee2dc0ab7d", APIVersion:"v1", ResourceVersion:"43646", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 18:57:12.304542  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (3.189135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.304998  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43648
I1010 18:57:12.305029  111177 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Available"
I1010 18:57:12.305835  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (2.94072ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.306166  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2", version 43649
I1010 18:57:12.306188  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:12.306221  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I1010 18:57:12.306233  111177 pv_controller.go:933] binding volume "pv-i-canbind-2" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.306247  111177 pv_controller.go:831] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.306270  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I1010 18:57:12.307665  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43648
I1010 18:57:12.307692  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 18:57:12.307707  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1010 18:57:12.307714  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1010 18:57:12.307720  111177 pv_controller.go:782] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I1010 18:57:12.308240  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.069554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
I1010 18:57:12.308919  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (2.020847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46560]
I1010 18:57:12.309302  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43651
I1010 18:57:12.309340  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:12.309354  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2
I1010 18:57:12.309373  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:12.309389  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:12.309423  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43651
I1010 18:57:12.309443  111177 pv_controller.go:864] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.309452  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1010 18:57:12.310689  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (2.905733ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.311331  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
I1010 18:57:12.311358  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
E1010 18:57:12.311669  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:57:12.311709  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I1010 18:57:12.311977  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.147243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46560]
I1010 18:57:12.312300  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43653
I1010 18:57:12.312341  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:12.312354  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2
I1010 18:57:12.312372  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:12.312386  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:12.313920  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43653
I1010 18:57:12.313952  111177 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Bound"
I1010 18:57:12.313969  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1010 18:57:12.313989  111177 pv_controller.go:903] volume "pv-i-canbind-2" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.314098  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (1.136361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46562]
I1010 18:57:12.314604  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound/status: (2.530299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39380]
E1010 18:57:12.316081  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:57:12.316178  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.122238ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:12.316179  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
I1010 18:57:12.316206  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
I1010 18:57:12.316336  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind-2: (2.128178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46560]
E1010 18:57:12.316474  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:57:12.316504  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
E1010 18:57:12.316517  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:57:12.316631  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" with version 43656
I1010 18:57:12.316656  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I1010 18:57:12.316667  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2] status: set phase Bound
I1010 18:57:12.318858  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind-2/status: (1.727881ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.319307  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" with version 43657
I1010 18:57:12.319341  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" entered phase "Bound"
I1010 18:57:12.319362  111177 pv_controller.go:959] volume "pv-i-canbind-2" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.319389  111177 pv_controller.go:960] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:12.319406  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 18:57:12.319437  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" with version 43657
I1010 18:57:12.319450  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 18:57:12.319468  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:12.319478  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: claim is already correctly bound
I1010 18:57:12.319488  111177 pv_controller.go:933] binding volume "pv-i-canbind-2" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.319499  111177 pv_controller.go:831] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.319520  111177 pv_controller.go:843] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.319530  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1010 18:57:12.319539  111177 pv_controller.go:782] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I1010 18:57:12.319548  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1010 18:57:12.319568  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I1010 18:57:12.319577  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2] status: set phase Bound
I1010 18:57:12.319595  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2] status: phase Bound already set
I1010 18:57:12.319609  111177 pv_controller.go:959] volume "pv-i-canbind-2" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2"
I1010 18:57:12.319628  111177 pv_controller.go:960] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:12.319643  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 18:57:12.319881  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.008304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44908]
I1010 18:57:12.321167  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (4.351083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46562]
I1010 18:57:12.418758  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.206762ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.519133  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.326558ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.619300  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.708645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.719685  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (3.045217ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.819217  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.647866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:12.918490  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.140568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.018631  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.263085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.119133  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.620675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.219398  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.909568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.320144  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (3.565877ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.419116  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.569288ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.519529  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.911642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.619521  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.911418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.623714  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
I1010 18:57:13.623775  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound
I1010 18:57:13.624138  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound" match with Node "node-1"
I1010 18:57:13.624139  111177 scheduler_binder.go:653] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound": No matching NodeSelectorTerms
I1010 18:57:13.624207  111177 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound" on node "node-1"
I1010 18:57:13.624206  111177 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound", PVC "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" on node "node-2"
I1010 18:57:13.624227  111177 scheduler_binder.go:725] storage class "wait-45bl" of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" does not support dynamic provisioning
I1010 18:57:13.624337  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound", node "node-1"
I1010 18:57:13.624402  111177 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-4", version 43643
I1010 18:57:13.624483  111177 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound", node "node-1"
I1010 18:57:13.624497  111177 scheduler_binder.go:404] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I1010 18:57:13.629533  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (3.805936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.629919  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43685
I1010 18:57:13.629937  111177 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.629972  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.629985  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4
I1010 18:57:13.630006  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:13.630019  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:13.630070  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" with version 43646
I1010 18:57:13.630086  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:13.630147  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.630164  111177 pv_controller.go:933] binding volume "pv-w-canbind-4" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.630178  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.630196  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.630207  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1010 18:57:13.632452  111177 cache.go:669] Couldn't expire cache for pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound. Binding is still in progress.
I1010 18:57:13.633365  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.833401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.633938  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43686
I1010 18:57:13.633979  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.633991  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4
I1010 18:57:13.634013  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:13.634028  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:13.634060  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43686
I1010 18:57:13.634087  111177 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Bound"
I1010 18:57:13.634102  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1010 18:57:13.634122  111177 pv_controller.go:903] volume "pv-w-canbind-4" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.637852  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-4: (2.616613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.638240  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" with version 43687
I1010 18:57:13.638277  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I1010 18:57:13.638286  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4] status: set phase Bound
I1010 18:57:13.640521  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-4/status: (2.013077ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.641312  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" with version 43688
I1010 18:57:13.641346  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" entered phase "Bound"
I1010 18:57:13.641361  111177 pv_controller.go:959] volume "pv-w-canbind-4" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.641385  111177 pv_controller.go:960] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.641399  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1010 18:57:13.641431  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" with version 43688
I1010 18:57:13.641441  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1010 18:57:13.641456  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.641475  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: claim is already correctly bound
I1010 18:57:13.641488  111177 pv_controller.go:933] binding volume "pv-w-canbind-4" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.641499  111177 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.641522  111177 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.641533  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1010 18:57:13.641540  111177 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I1010 18:57:13.641549  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1010 18:57:13.641569  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I1010 18:57:13.641580  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4] status: set phase Bound
I1010 18:57:13.641609  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4] status: phase Bound already set
I1010 18:57:13.641628  111177 pv_controller.go:959] volume "pv-w-canbind-4" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4"
I1010 18:57:13.641646  111177 pv_controller.go:960] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:13.641666  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1010 18:57:13.718645  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.1575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.818483  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.102206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:13.919482  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (3.120464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.018499  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.049114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.118689  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.298532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.218984  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.598665ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.319172  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.476212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.418938  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.426925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.522238  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (3.508815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.620321  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (3.917018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.630385  111177 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound" are bound
I1010 18:57:14.630483  111177 factory.go:710] Attempting to bind pod-mix-bound to node-1
I1010 18:57:14.632656  111177 cache.go:669] Couldn't expire cache for pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound. Binding is still in progress.
I1010 18:57:14.635330  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound/binding: (4.12762ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.635890  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:57:14.640322  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.238421ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.718560  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-mix-bound: (2.073447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.721039  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-4: (1.814232ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.723119  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind-2: (1.660449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.725350  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (1.874672ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.727690  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.611054ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.741294  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (12.942405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.752004  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" deleted
I1010 18:57:14.752066  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43653
I1010 18:57:14.752119  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 (uid: 40750d1a-48e0-40f3-88f2-09aa8a92d60d)", boundByController: true
I1010 18:57:14.752136  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2
I1010 18:57:14.753838  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind-2: (1.37242ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.754189  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2 not found
I1010 18:57:14.754222  111177 pv_controller.go:577] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I1010 18:57:14.754237  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I1010 18:57:14.756651  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (14.453129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.757029  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" deleted
I1010 18:57:14.760569  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (5.951623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.761048  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 43773
I1010 18:57:14.761090  111177 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Released"
I1010 18:57:14.761104  111177 pv_controller.go:1013] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1010 18:57:14.761693  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43686
I1010 18:57:14.761779  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:14.761795  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4
I1010 18:57:14.764159  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-w-canbind-4: (2.10348ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.764607  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 not found
I1010 18:57:14.764635  111177 pv_controller.go:577] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I1010 18:57:14.764647  111177 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I1010 18:57:14.768340  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (3.321371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.769266  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43777
I1010 18:57:14.769304  111177 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Released"
I1010 18:57:14.769316  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1010 18:57:14.769348  111177 pv_controller_base.go:216] volume "pv-i-canbind-2" deleted
I1010 18:57:14.769514  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 43777
I1010 18:57:14.769603  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 (uid: 305c269c-97b3-4e38-9825-0bee2dc0ab7d)", boundByController: true
I1010 18:57:14.769626  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4
I1010 18:57:14.769649  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4 not found
I1010 18:57:14.769668  111177 pv_controller.go:1013] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1010 18:57:14.769697  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind-2" was already processed
I1010 18:57:14.771519  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (14.044849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.773477  111177 pv_controller_base.go:216] volume "pv-w-canbind-4" deleted
I1010 18:57:14.773525  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-4" was already processed
I1010 18:57:14.782360  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.766106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.783306  111177 volume_binding_test.go:191] Running test immediate can bind
I1010 18:57:14.786038  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.202304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.788611  111177 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.895514ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.792298  111177 httplog.go:90] POST /api/v1/persistentvolumes: (2.874084ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.792904  111177 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind", version 43785
I1010 18:57:14.793087  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I1010 18:57:14.793243  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1010 18:57:14.793326  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Available
I1010 18:57:14.795979  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.195906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.796040  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (3.010963ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.796277  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43787
I1010 18:57:14.796318  111177 pv_controller.go:800] volume "pv-i-canbind" entered phase "Available"
I1010 18:57:14.796522  111177 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind", version 43786
I1010 18:57:14.796562  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:14.796599  111177 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I1010 18:57:14.796622  111177 pv_controller.go:933] binding volume "pv-i-canbind" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.796627  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43787
I1010 18:57:14.796636  111177 pv_controller.go:831] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.796656  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1010 18:57:14.796664  111177 pv_controller.go:851] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" bound to volume "pv-i-canbind"
I1010 18:57:14.796678  111177 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1010 18:57:14.796688  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Available
I1010 18:57:14.796698  111177 pv_controller.go:782] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1010 18:57:14.799835  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (3.164805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.800238  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (3.14137ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.800469  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43789
I1010 18:57:14.800505  111177 pv_controller.go:864] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.800519  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1010 18:57:14.800667  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43789
I1010 18:57:14.800819  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.800915  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind
I1010 18:57:14.801065  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:14.801169  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:14.802164  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind
I1010 18:57:14.802200  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind
E1010 18:57:14.802820  111177 factory.go:661] Error scheduling volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 18:57:14.803016  111177 scheduler.go:746] Updating pod condition for volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1010 18:57:14.803926  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43790
I1010 18:57:14.803989  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.804006  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind
I1010 18:57:14.804026  111177 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 18:57:14.804040  111177 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 18:57:14.804108  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (3.233927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.804510  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43790
I1010 18:57:14.804543  111177 pv_controller.go:800] volume "pv-i-canbind" entered phase "Bound"
I1010 18:57:14.804559  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: binding to "pv-i-canbind"
I1010 18:57:14.804578  111177 pv_controller.go:903] volume "pv-i-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.806352  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-canbind: (1.96147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I1010 18:57:14.806975  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind: (2.081308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.806370  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-canbind/status: (2.814405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46566]
I1010 18:57:14.807271  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" with version 43793
I1010 18:57:14.807311  111177 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: bound to "pv-i-canbind"
I1010 18:57:14.807325  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind] status: set phase Bound
E1010 18:57:14.807647  111177 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 18:57:14.808103  111177 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind
I1010 18:57:14.808126  111177 scheduler.go:598] Attempting to schedule pod: volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind
I1010 18:57:14.808251  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (3.686753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.808415  111177 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind" match with Node "node-1"
I1010 18:57:14.808578  111177 scheduler_binder.go:653] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind": No matching NodeSelectorTerms
I1010 18:57:14.808744  111177 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind", node "node-1"
I1010 18:57:14.808798  111177 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I1010 18:57:14.808977  111177 factory.go:710] Attempting to bind pod-i-canbind to node-1
I1010 18:57:14.809955  111177 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind/status: (2.414766ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.810314  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" with version 43795
I1010 18:57:14.810405  111177 pv_controller.go:744] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" entered phase "Bound"
I1010 18:57:14.810445  111177 pv_controller.go:959] volume "pv-i-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.810472  111177 pv_controller.go:960] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.810485  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 18:57:14.810516  111177 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" with version 43795
I1010 18:57:14.810526  111177 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 18:57:14.810539  111177 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.810546  111177 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: claim is already correctly bound
I1010 18:57:14.810556  111177 pv_controller.go:933] binding volume "pv-i-canbind" to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.810565  111177 pv_controller.go:831] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.810581  111177 pv_controller.go:843] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.810589  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1010 18:57:14.810599  111177 pv_controller.go:782] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I1010 18:57:14.810649  111177 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: binding to "pv-i-canbind"
I1010 18:57:14.810677  111177 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind]: already bound to "pv-i-canbind"
I1010 18:57:14.810686  111177 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind] status: set phase Bound
I1010 18:57:14.810817  111177 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind] status: phase Bound already set
I1010 18:57:14.810839  111177 pv_controller.go:959] volume "pv-i-canbind" bound to claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind"
I1010 18:57:14.810914  111177 pv_controller.go:960] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.810936  111177 pv_controller.go:961] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 18:57:14.811277  111177 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-canbind/binding: (1.975994ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46564]
I1010 18:57:14.811827  111177 scheduler.go:730] pod volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 18:57:14.814236  111177 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/events: (2.037219ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.905592  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods/pod-i-canbind: (4.503486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.909466  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind: (2.52689ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.912328  111177 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (2.149457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.922429  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (9.314212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.929113  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (5.594003ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.929326  111177 pv_controller_base.go:265] claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" deleted
I1010 18:57:14.929382  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43790
I1010 18:57:14.929424  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.929438  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind
I1010 18:57:14.931161  111177 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims/pvc-i-canbind: (1.498316ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I1010 18:57:14.931406  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind not found
I1010 18:57:14.931433  111177 pv_controller.go:577] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I1010 18:57:14.931447  111177 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Released
I1010 18:57:14.933992  111177 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.199331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I1010 18:57:14.934357  111177 store.go:231] deletion of /e363539f-4a5e-4e74-9ddc-5eb895b1e875/persistentvolumes/pv-i-canbind failed because of a conflict, going to retry
I1010 18:57:14.934678  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43804
I1010 18:57:14.934705  111177 pv_controller.go:800] volume "pv-i-canbind" entered phase "Released"
I1010 18:57:14.934718  111177 pv_controller.go:1013] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1010 18:57:14.934765  111177 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 43804
I1010 18:57:14.934791  111177 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind (uid: 8c724b0c-4523-4610-9ad7-2286851226c4)", boundByController: true
I1010 18:57:14.934805  111177 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind
I1010 18:57:14.934825  111177 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind not found
I1010 18:57:14.934833  111177 pv_controller.go:1013] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1010 18:57:14.935537  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (5.940512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.936639  111177 pv_controller_base.go:216] volume "pv-i-canbind" deleted
I1010 18:57:14.936679  111177 pv_controller_base.go:403] deletion of claim "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-i-canbind" was already processed
I1010 18:57:14.942052  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.127527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.942221  111177 volume_binding_test.go:920] test cluster "volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c" start to tear down
I1010 18:57:14.943593  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pods: (1.202765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.945659  111177 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/persistentvolumeclaims: (1.29226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.947401  111177 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.18629ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.948803  111177 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (1.075208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.949415  111177 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m2s&timeoutSeconds=422&watch=true: (1m18.308825275s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34988]
I1010 18:57:14.950146  111177 pv_controller_base.go:305] Shutting down persistent volume controller
I1010 18:57:14.950171  111177 pv_controller_base.go:416] claim worker queue shutting down
I1010 18:57:14.950186  111177 pv_controller_base.go:359] volume worker queue shutting down
I1010 18:57:14.950951  111177 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=32528&timeout=6m22s&timeoutSeconds=382&watch=true: (1m18.334356607s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34986]
I1010 18:57:14.951295  111177 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32528&timeout=9m19s&timeoutSeconds=559&watch=true: (1m18.224485232s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35014]
I1010 18:57:14.951329  111177 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32528&timeout=9m32s&timeoutSeconds=572&watch=true: (1m18.315074382s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34992]
I1010 18:57:14.951475  111177 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m15s&timeoutSeconds=435&watch=true: (1m18.310143054s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34872]
I1010 18:57:14.951646  111177 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32528&timeout=9m43s&timeoutSeconds=583&watch=true: (1m18.309551934s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34994]
I1010 18:57:14.951781  111177 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32528&timeout=5m13s&timeoutSeconds=313&watch=true: (1m18.311591707s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34990]
I1010 18:57:14.951940  111177 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m29s&timeoutSeconds=449&watch=true: (1m18.310895639s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34996]
I1010 18:57:14.952061  111177 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=32528&timeout=8m45s&timeoutSeconds=525&watch=true: (1m18.301455421s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35002]
I1010 18:57:14.952202  111177 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m17s&timeoutSeconds=437&watch=true: (1m18.225034585s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35008]
I1010 18:57:14.952353  111177 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=32903&timeout=9m16s&timeoutSeconds=556&watch=true: (1m18.343452006s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I1010 18:57:14.952415  111177 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32528&timeout=6m26s&timeoutSeconds=386&watch=true: (1m18.30987104s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34998]
I1010 18:57:14.952522  111177 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m53s&timeoutSeconds=473&watch=true: (1m18.223351155s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35010]
I1010 18:57:14.952579  111177 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m24s&timeoutSeconds=444&watch=true: (1m18.222746252s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35004]
I1010 18:57:14.952625  111177 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=32528&timeout=9m52s&timeoutSeconds=592&watch=true: (1m18.309518217s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35000]
I1010 18:57:14.952669  111177 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32528&timeout=7m45s&timeoutSeconds=465&watch=true: (1m18.22025289s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35012]
I1010 18:57:14.960630  111177 httplog.go:90] DELETE /api/v1/nodes: (11.385921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.961699  111177 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1010 18:57:14.964458  111177 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.09856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I1010 18:57:14.969358  111177 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.738724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
--- FAIL: TestVolumeBinding (82.10s)
    volume_binding_test.go:243: Failed to schedule Pod "pod-w-canbind-2": timed out waiting for the condition
    volume_binding_test.go:1131: PVC volume-scheduling-823be1af-8283-4928-b221-7925c610ca9c/pvc-w-canbind-3 phase not Bound, got Pending
    volume_binding_test.go:1179: PV pv-w-canbind-2 phase not Bound, got Available
    volume_binding_test.go:1179: PV pv-w-canbind-3 phase not Bound, got Available
    volume_binding_test.go:1179: PV pv-w-canbind-5 phase not Available, got Bound

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191010-184632.xml
